doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2309.01660 | 35 | Pythia-12b Pythia databricks/dolly-v2-12b 12b Trained on the Databricks machine learning platform
# Evaluating ToM performance
Using the ToM materials described above, for each trial we concatenated the statement and the corresponding question and fed them into the model. From the model output, we obtained the model's prediction of the next word by examining the output logits over all possible tokens. These logits are monotonically related to the probabilities the model assigns to the upcoming word. We then specifically examined the logits of the two possible answers to the belief and fact questions. To determine the LLM's answer, we chose the word with the higher logit of the two candidate words (e.g., floor or counter); restricting the choice to the two candidates yields more reliable predictions and avoids instances where certain models generate irrelevant outputs. The accuracy of each model was then calculated for true beliefs, false beliefs, and facts over all trials with the corresponding questions. The same procedure was followed for the two control conditions described above to further verify our findings.
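The following is a minimal sketch of this logit-based answer selection using the Hugging Face transformers API. It is not the authors' released code: the model name, prompt, and candidate answer words are illustrative assumptions, and any causal LM evaluated in the study would follow the same pattern.

```python
# Minimal sketch of the logit-based answer selection described above.
# Model name, prompt, and candidate answers are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for any causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Concatenate the trial statement and its question into a single prompt.
prompt = "Statement: ... Question: Where does the agent think the object is? Answer: the"
candidates = [" floor", " counter"]  # the two possible answer words for this trial

with torch.no_grad():
    inputs = tokenizer(prompt, return_tensors="pt")
    next_token_logits = model(**inputs).logits[0, -1]  # logits over the vocabulary for the next token

# Compare the logit of the first token of each candidate word and pick the larger one.
scores = {word: next_token_logits[tokenizer.encode(word)[0]].item() for word in candidates}
predicted = max(scores, key=scores.get)
print(scores, "->", predicted)
```

Repeating this over all true-belief, false-belief, and fact trials then gives the per-condition accuracies.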
# Hidden embeddings' selectivity for true- or false-belief trials | 2309.01660#35 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 36 | For each LLM, we tested its ToM capacity by extracting the model's hidden embeddings from all layers, along with the predicted logit for the upcoming token during the task. Specifically, for each trial, we concatenated the statement and the question presented to the human participants into a single input to the LLM (Table 1, Fig. 2a). The hidden embeddings were obtained from the output of each transformer module, in addition to the one that was input to the first transformer module of each model. The dimension of the returned embeddings for each model in this step was trials x words x nodes x layers, where words included those from both the statement and the question, and nodes referred to the embedding size of a layer. Following an approach comparable to that used to evaluate ToM-related activities of single neurons in the human brain, we used the embeddings corresponding to the question tokens and calculated their average values across all question tokens (dimension of trials x nodes x layers). We then performed statistical tests to evaluate whether each embedding (looping over nodes and layers) exhibited significant responses to trial conditions (i.e., true-belief and false-belief). In particular, we compared the | 2309.01660#36 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
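The per-layer extraction and question-token averaging described in the row above (chunk 36) can be sketched as follows. This is an illustrative, assumption-laden example rather than the authors' code: the model name, the statement/question split, and the token bookkeeping at the statement-question boundary are placeholders and simplifications.

```python
# Illustrative sketch: extract per-layer hidden embeddings for one trial and
# average over the question tokens, yielding a (nodes x layers) matrix per trial.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

statement = "Statement text presented to the participant."
question = " Question text presented after the statement."

with torch.no_grad():
    ids_statement = tokenizer(statement, return_tensors="pt").input_ids
    ids_full = tokenizer(statement + question, return_tensors="pt").input_ids
    out = model(ids_full)

# hidden_states is a tuple: the embedding-layer output plus one tensor per
# transformer block, each of shape (batch, tokens, nodes).
# Token alignment at the statement/question boundary is simplified here.
n_statement_tokens = ids_statement.shape[1]
per_layer = []
for layer_states in out.hidden_states:
    question_states = layer_states[0, n_statement_tokens:, :]  # keep only question tokens
    per_layer.append(question_states.mean(dim=0))              # average over question tokens
trial_embedding = torch.stack(per_layer, dim=-1)               # shape: nodes x layers
print(trial_embedding.shape)
```

Stacking these matrices over trials gives the trials x nodes x layers array used in the statistical tests.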
2309.01660 | 38 | $U_1 = n_1 n_2 + \frac{n_1(n_1+1)}{2} - R_1$, $\quad U_2 = n_1 n_2 + \frac{n_2(n_2+1)}{2} - R_2$
where $R_1$ and $R_2$ are the sums of the ranks for groups 1 and 2, respectively. We used a threshold of 0.05 to determine whether a given dimension of an embedding demonstrated a significant association with the trial category. Next, we grouped the embeddings by layer and model and calculated the percentage of embeddings that showed higher-than-chance responsiveness. We examined all embeddings across the layers of a given LLM and then, for each model, selected the layer with the highest percentage of responsive embeddings as the percentage of that model. All steps described here were repeated for the control experiments with the randomly permuted words in the statements described above, for further verification.
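One way to run this per-embedding comparison and the layer-wise percentage is sketched below using scipy's Mann-Whitney U test. The array names, shapes, and random placeholder data are assumptions for illustration only.

```python
# Sketch: test each embedding dimension for a true- vs. false-belief difference
# (Mann-Whitney U, p < 0.05) and report, per layer, the percentage of responsive
# dimensions; the model's percentage is the maximum over its layers.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
n_trials, n_nodes, n_layers = 80, 64, 12           # placeholder sizes
embeddings = rng.normal(size=(n_trials, n_nodes, n_layers))  # trials x nodes x layers
is_false_belief = rng.integers(0, 2, size=n_trials).astype(bool)

alpha = 0.05
percent_responsive = np.zeros(n_layers)
for layer in range(n_layers):
    significant = 0
    for node in range(n_nodes):
        fb = embeddings[is_false_belief, node, layer]
        tb = embeddings[~is_false_belief, node, layer]
        _, p = mannwhitneyu(fb, tb, alternative="two-sided")
        significant += p < alpha
    percent_responsive[layer] = 100.0 * significant / n_nodes

model_percentage = percent_responsive.max()        # layer with the highest percentage
print(percent_responsive.round(1), model_percentage)
```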
# Decoding the trial type using the population of embeddings | 2309.01660#38 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 39 | # Decoding the trial type using the population of embeddings
In order to examine whether there is a causal relationship between the observed selectivity of the embeddings and model performance, we conducted a decoding analysis using the entire population of embeddings of each layer. Specifically, for each layer, starting from the embeddings with the dimension of trials x words x nodes, we averaged across question tokens for each trial, resulting in the dimension of trials x nodes. We considered nodes as the equivalent of neurons in the brain and predicted the type of trial as the target variable. We used a 75% training and 25% testing split based on the pairing of trials, so that trials within a pair were not separated into the two datasets. We used a logistic regression classifier with L2 regularization of $C = 1$, which minimizes the cross-entropy loss with a penalty on the squared parameter values:
$$\min_{w}\; C \sum_{i=1}^{n} \Big(-y_i \log\hat{p}(X_i) - (1-y_i)\log\big(1-\hat{p}(X_i)\big)\Big) + \frac{1}{2}\lVert w \rVert^2$$ | 2309.01660#39 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
where the target variable $y_i$ belongs to the set $\{0, 1\}$ for data point $i$, and $w$ is the weight vector.
For each layer of a given LLM, we performed the same analysis 100 times with different train/test splits and calculated the average accuracy across these iterations. The decoding accuracy of each model was then calculated by averaging over all layers of the model. As a control, we repeated the same procedures for the same layers, but using the ToM materials with randomly permuted words in the statements.
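A minimal decoding sketch consistent with this description is shown below, using scikit-learn's logistic regression with L2 regularization (C = 1) and a pair-preserving 75/25 split repeated over iterations. The variable names, placeholder data, and the grouping scheme are assumptions, not the authors' implementation.

```python
# Sketch: decode trial type (true- vs. false-belief) from one layer's embeddings
# (trials x nodes) with L2-regularized logistic regression, using a 75/25 split
# that keeps paired trials in the same partition, repeated 100 times.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n_pairs, n_nodes = 40, 64                          # placeholder sizes
X = rng.normal(size=(2 * n_pairs, n_nodes))        # trials x nodes for one layer
y = np.tile([0, 1], n_pairs)                       # 0 = true belief, 1 = false belief
pair_id = np.repeat(np.arange(n_pairs), 2)         # paired trials share an id

splitter = GroupShuffleSplit(n_splits=100, test_size=0.25, random_state=0)
accuracies = []
for train_idx, test_idx in splitter.split(X, y, groups=pair_id):
    clf = LogisticRegression(C=1.0, penalty="l2", max_iter=1000)
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

layer_accuracy = np.mean(accuracies)               # average over the 100 splits
print(round(layer_accuracy, 3))
```

Averaging these layer accuracies over all layers then gives the model-level decoding accuracy.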
# Acknowledgement
We are grateful to Yuedong Fang, Yue Cao, and Nikola Bolt for their comments, which improved the manuscript, and to Douglas Kellar for facilitating access to computational resources. We acknowledge the use of ChatGPT for assistance in refining the wording of this manuscript (38). Z.M.W. is supported by NIH U01NS123130.
# Code availability
All code will be made publicly available on GitHub when the manuscript is accepted for publication. | 2309.01660#40 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 41 | # Code availability
All codes will be made publicly available on GitHub when the manuscript is accepted for publication.
[Figure 1 graphic: A. Example of the Theory of Mind (ToM) material (statement; belief and fact questions; true-belief trial, agent's view same as reality; false-belief trial, agent's view different from reality; LLM predicts the probability (logit) of the next word). B. Large language model (LLM) performance (actual vs. randomly permuted words in statement vs. question only). C. LLM performance on false-belief trials by model size (accuracy improvement vs. model size).] | 2309.01660#41 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 42 | Figure 1. Theory of Mind capability in various large language models. A. ToM tasks comprising statements and questions were input to each LLM, and the predicted upcoming word and its probability were examined (Methods). The ToM trials were either true- or false-belief, depending on whether the agent's viewpoint was aligned with reality or not. In addition, we assessed the models' ability to answer questions about the factual state of reality provided by the statements. B. Model performance on questions of true-belief (left), false-belief (middle), and fact trials (right). For control experiments, we randomly permuted the words within the statements, input these shuffled words along with the questions to the models, and repeated the same evaluation procedures. We also assessed the models' performance on question-only trials without the statements to evaluate the impact of factors unrelated to the context provided by the statements. C. LLMs' accuracy in answering false-belief questions and its dependency on the size of the models. We plotted the accuracy improvement resulting from inputting statements and questions, compared to the accuracy from inputting only the questions, across different LLMs. | 2309.01660#42 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 43 | [Figure 2 graphic: A. Example of the Theory of Mind (ToM) material (statement, belief question, false-belief (FB) and true-belief (TB) trials; LLM hidden embeddings; Mann-Whitney U test, H0: same distribution). B. Example of an embedding responding to trial type. C. Embeddings in Falcon-40b (layers 5, 25, 45) show selective responses to true- and false-belief trials (embedding value / z-score, FB vs. TB). D. Percentage of embeddings responding significantly to true or false beliefs.] | 2309.01660#43 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 44 | [Figure 2 graphic, continued: D. Percentage of embeddings responding significantly to true or false beliefs, shown per layer and per model (actual vs. randomly permuted words in statements, e.g. LLaMa-13b, LLaMa-30b, Falcon-40b), against model accuracy above chance on false-belief trials.] | 2309.01660#44 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 45 | Figure 2. Responding embeddings to true- versus false-belief trials. A. To investigate whether hidden embeddings exhibit selective modulations to true versus false beliefs and to compare the artificial embeddings with human single neurons, we employed similar ToM tasks as those previously tested on humans. For each trial, the statement and the question were concatenated and input to the LLMs, obtaining hidden embeddings from all words and layers (Methods). We then excluded embeddings from words during the statement and computed average values for words within the question. A Mann Whitney U test was conducted to examine whether a hidden embedding exhibited significant difference between false- and true-belief trials. B. Distributions of embedding values from Falcon-40b layer 23 to illustrate that the activities were significantly different for false-belief trials than for true-belief trials. C. Examples from different layers of the Falcon-40b model show the average embedding values over true- and false-belief trials. Each dot represents a dimension of the hidden embedding; orange dots indicate the embedding with significant differences between trial types, while the gray dots indicate no significance. D. The percentage of embedding dimensions significantly | 2309.01660#45 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 46 | orange dots indicate the embedding with significant differences between trial types, while the gray dots indicate no significance. D. The percentage of embedding dimensions significantly selective to true and false beliefs varies across models and layers (left), with the Falcon-40b model demonstrating the highest percentage. These results are compared to the percentage of single neurons in the human brain (light green). The percentages across layers of three example models are shown in the insets. The percentages of significant embeddings across different models were found to be dependent on the false-belief trial accuracy (right). | 2309.01660#46 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 47 | [Figure 3 graphic: A. Decoding trial types from hidden embeddings (Falcon-40b, layer 25; decoded true- vs. false-belief probability; absolute TB-FB difference for correct vs. incorrect trials). B. Trial-type decoding results across models (decoding accuracy, actual vs. randomly permuted words in statements).] | 2309.01660#47 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 48 | Figure 3. Decoding trial types using hidden embeddings. A. Using Falcon-40b as an example, higher probabilities for the correct trial type were observed for most trials when decoding from all embeddings at layer 25 (top). The selected embeddings showed a greater difference between true- and false-belief trials in correctly decoded trials than in incorrectly decoded trials (bottom). B. Across different models, larger models generally demonstrated higher decoding accuracy for true- and false-belief trials using all embeddings from each layer. In contrast, decoding accuracies remained consistently low when the words in the statements were randomly permuted before being input to the LLMs.
# Reference:
2. 3.
7. 8. 9. | 2309.01660#48 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 49 | N. Aggarwal, G. J. Saxena, S. Singh, A. Pundir, Can I say, now machines can think? arXiv preprint arXiv:2307.07526, (2023). M. Sallam, in Healthcare. (MDPI, 2023), vol. 11, pp. 887. J. He-Yueya, G. Poesia, R. E. Wang, N. D. Goodman, Solving math word problems by combining language models with symbolic solvers. arXiv preprint arXiv:2304.09102, (2023). Z. Yuan, H. Yuan, C. Tan, W. Wang, S. Huang, How well do Large Language Models perform in Arithmetic tasks? arXiv preprint arXiv:2304.02015, (2023). L. Pan, A. Albalak, X. Wang, W. Y. Wang, Logic-lm: Empowering large language models with symbolic solvers for faithful logical reasoning. arXiv preprint arXiv:2305.12295, (2023). S. Yao et al., Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint | 2309.01660#49 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 51 | 10. M. Kosinski, Theory of mind may have spontaneously emerged in large language
models. arXiv preprint arXiv:2302.02083, (2023). T. Ullman, Large language models fail on trivial alterations to theory-of-mind tasks. arXiv preprint arXiv:2302.08399, (2023). S. Trott, C. Jones, T. Chang, J. Michaelov, B. Bergen, Do Large Language Models know what humans know? Cognitive Science 47, e13309 (2023).
13. M. C. Frank, Baby steps in evaluating the capacities of large language models. Nature | 2309.01660#51 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 52 | 13. M. C. Frank, Baby steps in evaluating the capacities of large language models. Nature
Reviews Psychology, 1-2 (2023). H. M. Wellman, D. Cross, J. Watson, Meta-analysis of theory-of-mind development: The truth about false belief. Child development 72, 655-684 (2001). H. Wimmer, J. Perner, Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children's understanding of deception. Cognition 13, 103-128 (1983). K. Milligan, J. W. Astington, L. A. Dack, Language and theory of mind: Meta-analysis of the relation between language ability and false-belief understanding. Child development 78, 622-646 (2007). V. E. Stone, S. Baron-Cohen, R. T. Knight, Frontal lobe contributions to theory of mind. Journal of cognitive neuroscience 10, 640-656 (1998).
18. M. Siegal, R. Varley, Neural systems involved in'theory of mind'. Nature Reviews | 2309.01660#52 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 53 | 18. M. Siegal, R. Varley, Neural systems involved in'theory of mind'. Nature Reviews
Neuroscience 3, 463-471 (2002). R. Saxe, N. Kanwisher, People thinking about thinking people: the role of the temporo-parietal junction in 'theory of mind'. Neuroimage 19, 1835-1842 (2003). R. Saxe, L. J. Powell, It's the thought that counts: specific brain regions for one component of theory of mind. Psychological science 17, 692-699 (2006). G. Penedo et al., The RefinedWeb dataset for Falcon LLM: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116, (2023). E. Almazrouei et al. (2023).
20.
21.
22.
23,
24.
25.
26. 27.
28.
29.
30.
31. | 2309.01660#53 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 54 | H. Touvron et al., LLaMA: open and efficient foundation language models, 2023. URL https://arxiv. org/abs/2302.13971. S. Biderman et al., in International Conference on Machine Learning. (PMLR, 2023), pp. 2397-2430. A. Radford et al., Language models are unsupervised multitask learners. OpenAI blog 1, 9 (2019). E. Beeching et al., Open LLM Leaderboard. Hugging Face, (2023). U. Frith, F. Happé, Theory of mind and selfâconsciousness: What is it like to be autistic? Mind & language 14, 82-89 (1999). J. Perner, Z. Dienes, Developmental aspects of consciousness: How much theory of mind do you need to be consciously aware? Consciousness and cognition 12, 63-82 (2003). J. I. Carpendale, C. Lewis, Constructing an understanding of mind: The development of children's social understanding within social interaction. Behavioral and brain sciences 27, 79-96 (2004). C. Lewis, N. H. Freeman, C. Kyriakidou, K. MaridakiâKassotaki, D. M. Berridge, Social | 2309.01660#54 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 55 | C. Lewis, N. H. Freeman, C. Kyriakidou, K. MaridakiâKassotaki, D. M. Berridge, Social influences on false belief access: Specific sibling influences or general apprenticeship? Child development 67, 2930-2947 (1996). T. Wolf et al., Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771, (2019). X. a. L. Geng, Hao. (2023). https://huggingface.co/ausboss/llama-13b-supercot. | 2309.01660#55 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 56 | 32. 33. 34. W.-L. Chiang et al., Vicuna: An open-source chatbot impressing gpt-4 with 90%*
chatgpt quality. See https://vicuna. lmsys. org (accessed 14 April 2023), (2023). https://huggingface.co/openaccess-ai-collective/wizard-mega-13b. https://huggingface.co/elinas/chronos-33b.
35. 36. 37. M. Conover et al., Free dolly: Introducing the worldâs first truly open instruction-tuned
37. M. Conover et al., Free dolly: Introducing the worldâs first truly open instruction-tuned Ilm. (2023).
# llm. (2023). ChatGPT, https://chat.openai.com/chat.
38. | 2309.01660#56 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
2309.01219 | 1 | # Siren's Song in the AI Ocean:
A Survey on Hallucination in Large Language Models. Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi. Tencent AI Lab, Soochow University, Zhejiang University, Renmin University of China, Nanyang Technological University, Toyota Technological Institute at Chicago
# Abstract | 2309.01219#1 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
2309.01219 | 2 | # Abstract
While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge. This phenomenon poses a substantial challenge to the reliability of LLMs in real-world scenarios. In this paper, we survey recent efforts on the detection, explanation, and mitigation of hallucination, with an emphasis on the unique challenges posed by LLMs. We present taxonomies of the LLM hallucination phenomena and evaluation benchmarks, analyze existing approaches aiming at mitigating LLM hallucination, and discuss potential directions for future research.
User Input: Can you recommend a delicious recipe for dinner? LLM Response: Yes, here is a delicious recipe for lunch. So how about fried chicken with mashed potatoes? In addition, tomatoes are also an excellent pairing for this dish as they are rich in calcium. Enjoy this steak! Hallucination Explanation: Input-Conflicting Hallucination: the user wants a recipe for dinner while the LLM provides one for lunch. Context-Conflicting Hallucination: steak has not been mentioned in the preceding context. Fact-Conflicting Hallucination: tomatoes are not rich in calcium in fact.
Figure 1: Three types of hallucinations occurred in LLM responses (best viewed in color). | 2309.01219#2 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models |
2309.01219 | 3 | Figure 1: Three types of hallucinations occurred in LLM responses (best viewed in color).
# Introduction
Large language models (LLMs), particularly characterized by their substantial number of parameters, have arisen as a promising cornerstone for the development of natural language processing (NLP) and artificial intelligence (Zhao et al., 2023c). With proper alignment techniques, such as supervised finetuning (SFT; Zhang et al., 2023b) and reinforcement learning from human feedback (RLHF; Ouyang et al., 2022; Fernandes et al., 2023), recent LLMs (OpenAI, 2023a; Touvron et al., 2023b; OpenAI, 2023b, inter alia) have exhibited strong capabilities in solving various downstream tasks.
Nonetheless, as exemplified in Figure 1, LLMs, despite their remarkable success, occasionally
This survey paper was completed during Yue Zhang ([email protected]), Yafu Li, Tingchen Fu, and Yu Zhang's internships at Tencent AI Lab.
# Corresponding author ([email protected]). | 2309.01219#3 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models |
2309.01219 | 4 | # Corresponding author ([email protected]).
produce outputs that, while seemingly plausible, deviate from user input (Adlakha et al., 2023), previously generated context (Liu et al., 2022), or factual knowledge (Min et al., 2023; Muhlgay et al., 2023; Li et al., 2023a); this phenomenon is commonly referred to as hallucination, which significantly undermines the reliability of LLMs in real-world scenarios (Kaddour et al., 2023). For instance, LLMs can potentially fabricate erroneous medical diagnoses or treatment plans that lead to tangible real-life risks (Umapathi et al., 2023).
While hallucination in conventional natural language generation (NLG) settings has been widely studied (Ji et al., 2023), understanding and addressing the hallucination problem within the realm of LLMs encounters unique challenges introduced by 1. Massive training data: | 2309.01219#4 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models |
2309.01219 | 5 | in contrast to carefully curating data for a specific task, LLM pre- [Figure 2 contents: Definition (Sec. 2); Benchmark (Sec. 3): input-conflicting (BEGIN, QMSum), context-conflicting (HADES), fact-conflicting (TruthfulQA, FActScore, FENMT, FEQA, HaluEval, FACTOR); Sources (Sec. 4): parametric memorization, curating training data, honesty-oriented SFT, overinflated self-confidence, honesty-oriented RL, misleading alignment, decoding strategy, knowledge retrieval, generation-time risk, exploiting uncertainty, along the pre-training / SFT / RLHF / inference timeline; Mitigation (Sec. 5)] | 2309.01219#5 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models |
2309.01219 | 6 | Figure 2: The overview structure of this paper: We initially categorize LLM hallucinations into three distinct types and then introduce corresponding evaluation benchmarks. Subsequently, we explore the source of hallucinations and discuss mitigation strategies throughout the life cycle of LLMs (pre-training → SFT → RLHF → inference).
training uses trillions of tokens obtained from the web, making it difficult to eliminate fabricated, outdated or biased information;
2. Versatility of LLMs: general-purpose LLMs are expected to excel in cross-task, cross-lingual, and cross-domain settings, posing challenges for comprehensive evaluation and mitigation of hallucination.
3. Imperceptibility of errors: as a byproduct of their strong abilities, LLMs may generate false information that initially seems highly plausible, making it challenging for models or even humans to detect hallucination.
tioned challenges, which strongly motivates us to compile this survey. | 2309.01219#6 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models |
2309.01219 | 7 | tioned challenges, which strongly motivates us to compile this survey.
We organize this paper as follows, as also depicted in Figure 2. We first introduce the background of LLMs and offer our definition of hallucination in LLMs (§2). Next, we introduce relevant benchmarks and metrics (§3). Subsequently, we discuss potential sources of LLM hallucinations (§4), and provide an in-depth review of recent work towards addressing the problem (§5). Finally, we present forward-looking perspectives (§6). We will consistently update the related open-source materials, which can be accessed at https://github.com/HillZhang1999/llm-hallucination-survey.
In addition, the RLHF process (Ouyang et al., 2022), the vague knowledge boundary (Ren et al., 2023) and the black-box property of LLMs (Sun et al., 2022) also complicate the detection, explanation, and mitigation of hallucination in LLMs. There has been a notable upsurge in cutting-edge research dedicated to addressing the aforemen-
# 2 Hallucination in the Era of LLM
We begin this section by overviewing the history of LLMs (§2.1). Next, we present our definition of LLM hallucination, by breaking it down
| 2309.01219#7 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models |
2309.01219 | 8 | We begin this section by overviewing the history of LLMs (§2.1). Next, we present our definition of LLM hallucination, by breaking it down
into three sub-categories (§2.2). In addition, we discuss the unique challenges of hallucination in LLMs (§2.3), and compare hallucination with other prevalent problems that are frequently encountered in the realm of LLMs (§2.4).
# 2.1 Large Language Models | 2309.01219#8 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models |
2309.01219 | 9 | An important category of LLMs is autoregressive language models (Radford et al., 2019; Chowdhery et al., 2022; Touvron et al., 2023a, inter alia). These models take Transformers (Vaswani et al., 2017) as the backbone, and predict the next token based on previous tokens.1 Prior to the widespread adoption of Transformers, autoregressive language models were built on the backbones of n-grams (Bickel et al., 2005; Pauls and Klein, 2011) and recurrent neural networks (Mikolov et al., 2010), and have been applied to various NLG tasks such as summarization (Nallapati et al., 2017) and dialogue generation (Chen et al., 2017). Transformer-based LLMs have demonstrated exceptional performance across tasks, and have therefore shifted NLP from a paradigm centered on task-specific solutions to general-purpose pre-training (Devlin et al., 2019; Radford et al., 2019). The pretrained models are optimized on various self-supervision objectives (Devlin et al., 2019; Raffel et al., 2020; Lewis et al., 2020a, inter alia), using large-scale unlabeled corpora. Sub- | 2309.01219#9 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models |
2309.01219 | 10 | objectives (Devlin et al., 2019; Raffel et al., 2020; Lewis et al., 2020a, inter alia), using large-scale unlabeled corpora. Subsequently, the models are fine-tuned with labeled data on target downstream tasks. Representations from the pretrained models can typically reduce the demand for annotated data and achieve significant performance improvement across downstream tasks (Qiu et al., 2020; Min et al., 2021; Li et al., 2022b, inter alia). | 2309.01219#10 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models |
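The next-token formulation described above can be made concrete with a short, illustrative sketch. The code below is not from the surveyed paper; it assumes the Hugging Face transformers library and the small public gpt2 checkpoint, and greedily decodes a continuation by repeatedly appending the highest-logit token, i.e., the "predict the next token based on previous tokens" loop.

```python
# Minimal sketch of autoregressive next-token prediction with greedy decoding.
# Assumptions: torch and transformers are installed; "gpt2" is used only as a
# small, publicly available causal LM for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Large language models occasionally"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                       # generate 20 tokens, one at a time
        logits = model(input_ids).logits      # shape: (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()      # greedy: highest-logit next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Replacing the argmax with sampling from the softmax distribution gives stochastic decoding; in both cases every generated token is conditioned only on the tokens that precede it.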
2309.01219 | 11 | In addition to performance improvement on downstream tasks, recent work has found that scaling up pretrained language models, both in terms of model parameter count and the volume of pre-training data, enables some remarkable abilities, including in-context learning (Brown et al., 2020), reasoning (Wei et al., 2022), and instruction following (Ouyang et al., 2022). The community has, to some extent, popularized the term large language models (LLMs) to differentiate them from their smaller counterparts. Notably, LLMs exhibit the potential to accurately comprehend human instructions and efficiently tackle a variety of complex tasks with only minimal or even no supervision (OpenAI, 2023a,b; Touvron et al., 2023b).
1 Another variant of language models predicts masked tokens in a corrupted sequence (Devlin et al., 2019; Liu et al., 2019; Lan et al., 2019, inter alia).
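As an illustrative (not paper-provided) example of the in-context learning ability mentioned above, the sketch below assembles a few-shot prompt for a toy sentiment-labeling task; a sufficiently capable LLM is expected to complete the final line with the correct label from the demonstrations alone, without any gradient update.

```python
# Illustrative few-shot prompt for in-context learning; no weights are updated.
# The task is conveyed purely through demonstrations placed in the input text.
demonstrations = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I regret buying this laptop.", "negative"),
    ("The soup was bland and cold.", "negative"),
]
query = "The concert exceeded every expectation."

prompt = "Label the sentiment of each review.\n\n"
for text, label in demonstrations:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {query}\nSentiment:"

print(prompt)  # this string would be fed to a causal LLM as-is
```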
# 2.2 What is LLM Hallucination | 2309.01219#11 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models |
2309.01219 | 12 | # 2.2 What is LLM Hallucination
While LLMs have demonstrated remarkable performance, they still inevitably encounter different problems in practical applications, among which hallucination is one of the most significant issues. The term hallucination was already widely adopted in the NLP community before the emergence of LLMs, typically referring to generating content that is nonsensical or unfaithful to the provided source (Ji et al., 2023).
We argue that the definition appears to have considerably expanded due to the versatility of LLMs. To this end, we categorize hallucination within the context of LLMs as follows:
⢠Input-conflicting hallucination, where LLMs generate content that deviates from the source input provided by users;
⢠Context-conflicting hallucination, where LLMs generate content that conflicts with previously generated information by itself;
⢠Fact-conflicting hallucination, where LLMs generate content that is not faithful to estab- lished world knowledge.
We present examples for each type of hallucination in Table 1, and discuss them in detail below. | 2309.01219#12 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models |
2309.01219 | 13 | We present examples for each type of hallucination in Table 1, and discuss them in detail below.
Input-conflicting hallucination. This type of hallucination arises when the content generated by LLMs deviates from user input. Typically, user input for LLMs comprises two components: a task instruction (e.g., a user prompt for summarization) and a task input (e.g., a document to be summarized). A contradiction between the LLM response and the task instruction typically reflects a misunderstanding of user intent. In contrast, when the contradiction arises between the generated content and the task input, the hallucination is in line with the conventional definition in specific NLG tasks, such as machine translation (Lee et al., 2019) and summarization (Maynez et al., 2020; Pu et al., 2023). For instance, the first example in Table 1 highlights a contradiction between the generated content and the task input: when the user requests a summary, the LLM incorrectly replaces the person's name in its response (Hill→Lucas), even though the general form can indeed be perceived as a suitable summary. | 2309.01219#13 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models |
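As a concrete illustration of this category (a toy check, not a method proposed in the survey), the following Python sketch flags a summary that introduces capitalized names absent from the source document, which would catch the Hill→Lucas substitution above. The regex-based name extraction and the `flag_input_conflict` helper are assumptions made purely for illustration.

```python
import re

def flag_input_conflict(source: str, summary: str) -> list[str]:
    """Return capitalized tokens (a crude proxy for person names) that the
    summary introduces but the source never mentions."""
    name_pattern = re.compile(r"\b[A-Z][a-z]+\b")
    source_names = set(name_pattern.findall(source))
    summary_names = set(name_pattern.findall(summary))
    # Names present in the summary but absent from the source are suspicious.
    return sorted(summary_names - source_names)

source = ("My friend Hill and I love playing basketball together. "
          "We spend hours on the court, dribbling, shooting, and competing.")
summary = "Lucas and I love playing basketball and this makes us good friends."

print(flag_input_conflict(source, summary))  # ['Lucas'] -> possible input-conflicting hallucination
```

A real detector would of course need named-entity recognition and coreference handling; the point here is only to make the input-vs-output contradiction tangible.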
2309.01219 | 14 | Table 1 (Type / Turn / Example):
Input-conflicting Hallucination. User: Please summarize the following content: My friend Hill and I love playing basketball together. We spend hours on the court, dribbling, shooting, and competing against each other. Our passion for this game brings us closer as good friends. Model: Lucas and I love playing basketball and this makes us good friends.
Context-conflicting Hallucination. User: Tell me something about the NBA Commissioner. Model: In the world's most popular basketball league, the NBA, Commissioner Adam Silver has long been regarded as a leader and decision-maker. Firstly, Silver has claimed his dedication to maintaining fairness and competitiveness in the game. Secondly, Silver has repeatedly emphasized his concern for player health and well-being. In this case, Stern's words and actions clash with his stated goal of caring for player welfare.
Fact-conflicting Hallucination. User: Who was the mother of Afonso II, the third king of Portugal? Model: The mother of Afonso II was Queen Urraca of Castile. | 2309.01219#14 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models |
2309.01219 | 15 | Table 1: Illustrations of the three types of LLM hallucination that we defined. For input-conflicting hallucination, the LLM makes a mistake in the person's name (Hill→Lucas) during summarization. For context-conflicting hallucination, the LLM discusses Silver early on but later switches to Stern, resulting in a contradiction. For fact-conflicting hallucination, the LLM states that the mother of Afonso II was Queen Urraca of Castile, while the correct answer is Dulce Berenguer of Barcelona.
Context-conflicting hallucination. LLMs may exhibit self-contradictions when generating lengthy or multi-turn responses. This type of hallucination arises when LLMs lose track of the context or fail to maintain consistency throughout the conversation, potentially due to their limitations in maintaining long-term memory (Liu et al., 2023d) or identifying relevant context (Shi et al., 2023a). The second example in Table 1 demonstrates how a user request to introduce the NBA Commissioner leads to a context-conflicting hallucination. Specifically, the LLM initially introduces Silver (the current NBA commissioner), but later refers to Stern (the former NBA commissioner), demonstrating a lack of consistency in the generation. | 2309.01219#15 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models |
2309.01219 | 16 | The focus of recent hallucination research in LLMs is predominantly on fact-conflicting hallucination, despite the importance of the other two types. Possible reasons include but are not limited to: (1) input- and context-conflicting hallucinations have been extensively studied in conventional NLG settings (Ji et al., 2023), whereas fact-conflicting hallucination poses more complex challenges in LLMs due to the absence of an authoritative knowledge source as a reference; (2) fact-conflicting hallucinations tend to have more side effects on the practical applications of LLMs, leading to a greater emphasis in recent studies. In light of this research status, the following sections of our paper will primarily concentrate on fact-conflicting hallucinations, and we will explicitly highlight the cases where we address the other two types. | 2309.01219#16 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models |
2309.01219 | 17 | Fact-conflicting hallucination. This type of hallucination occurs when LLMs generate information or text that contradicts established world knowledge. The sources of fact-conflicting hallucinations can be multifarious and introduced at different stages of the LLM life cycle, as shown in Figure 2. We present an illustration in Table 1 (third example): in this case, the user asks the LLM about the mother of Afonso II. The LLM gives a wrong answer (Queen Urraca of Castile instead of Dulce Berenguer of Barcelona), which can easily mislead less knowledgeable users.
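To make this category concrete, the toy Python sketch below checks a model answer against a small hand-built fact table; flagging a mismatch corresponds to detecting a fact-conflicting hallucination. The `facts` dictionary and `check_fact` helper are hypothetical illustrations, not part of the survey.

```python
# Toy fact-conflict check: compare a model answer against a trusted fact table.
facts = {
    "mother of Afonso II": "Dulce Berenguer of Barcelona",  # established world knowledge
}

def check_fact(question_key: str, model_answer: str) -> bool:
    """Return True if the model answer contains the trusted answer, False otherwise."""
    gold = facts.get(question_key)
    if gold is None:
        return True  # no reference available; cannot verify
    return gold.lower() in model_answer.lower()

model_answer = "The mother of Afonso II was Queen Urraca of Castile."
print(check_fact("mother of Afonso II", model_answer))  # False -> fact-conflicting hallucination
```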
# 2.3 Unique Challenge in the Era of LLM
Although the problem of hallucination has been extensively researched in conventional NLG tasks (Ji et al., 2023), hallucinations in LLMs bring forth a unique and complex set of challenges stemming from the training process and usage scenarios.
Massive training data. Unlike task-specific NLG models trained on limited-scaled datasets, LLMs are pre-trained on trillions of tokens. These | 2309.01219#17 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models |
2309.01219 | 18 | Massive training data. Unlike task-specific NLG models trained on limited-scaled datasets, LLMs are pre-trained on trillions of tokens. These pre-training corpora are automatically collected from the web and often contain a significant amount of fabricated, outdated, or biased information (Penedo et al., 2023). Such inadequate data may lead LLMs to generate hallucinated content. The large data scale may also increase the difficulty of applying data-centric approaches to mitigate hallucination in LLMs.
Table 2 (Type / Turn / Example):
Ambiguity. User: What is the capital of France? Model: The capital of a European country.
Incompleteness. User: How can I replace a flat tire on my car? Model: To replace a flat tire, there are four steps. First, you'll need to use a jack to lift the car. Second, please remove the lug nuts and flat tire.
Bias. User: Tell me about a typical elementary school teacher. Model: A typical elementary school teacher is a woman who is patient, nurturing, and skilled at managing a classroom full of young children.
Under-informativeness. User: Introduce the most popular task in NLP in 2022. Model: Sorry, I don't know. Maybe you can check external search engines.
Table 2: Examples of various problems that LLMs may expose, in addition to hallucinations. | 2309.01219#18 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models |
2309.01219 | 19 | culty in detecting and reducing input- and context-conflicting hallucination, as we can no longer resort to simple superficial patterns. Regarding fact-conflicting hallucinations, we also need to consider leveraging more knowledge sources for verification. These factors collectively introduce substantial new challenges.
# 2.4 Other Problems in LLMs
Versatility of LLMs. Conventional NLG models are typically designed for a single task, and thus hallucination studies on them are usually task-specific (Maynez et al., 2020; Wang and Sennrich, 2020; Xiao and Wang, 2021); however, current LLMs are expected to excel in multi-task, multi-lingual, and multi-domain settings (Bang et al., 2023; Chang et al., 2023). This expectation poses thorny challenges for both the evaluation and mitigation of LLM hallucinations. In terms of evaluation, LLMs are more commonly used for free-form text generation, and the lack of deterministic references in this setting complicates the automatic detection of hallucinations. Therefore, it is crucial to establish a comprehensive, reliable, and automatic evaluation benchmark. Regarding mitigation, the proposed methods should be robustly effective, maintaining decent performance when applied to various scenarios. | 2309.01219#19 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models |
2309.01219 | 20 | Besides hallucination, LLMs also present other problems. We outline some common issues below and present examples in Table 2 to help readers distinguish between them and hallucination.
Ambiguity. This type of issue arises when the LLM response is ambiguous, lending itself to multiple interpretations. The response may not necessarily be incorrect, but it falls short of providing a useful answer to the user question (Tamkin et al., 2022). The first example in Table 2 exemplifies this issue. The desired answer is "Paris", yet the LLM provides an ambiguous response.
Incompleteness. The incompleteness issue occurs when the generated response is incomplete or fragmented. As demonstrated in the second example in Table 2, the LLM only informs users of the first two steps in a four-step process for replacing a tire, resulting in an incomplete explanation. | 2309.01219#20 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models |
2309.01219 | 21 | Invisibility of errors. Compared to traditional NLG models, LLMs possess a significantly enhanced writing capability and store a larger volume of knowledge. Consequently, the false information hallucinated by LLMs often appears highly plausible, to the extent that even humans may find it hard to detect. This amplifies the diffi-
Bias. Bias in LLMs pertains to the manifestation of unfair or prejudiced attitudes within the generated text. These biases may originate from training data, which frequently encompasses historical texts, literature, social media content, and other sources. Such sources may inherently mirror so-
Benchmark (Evaluation, Size, Task Format, Metrics): TruthfulQA (Gen&Dis, 817, Question Answering, Truthfulness); FactualityPrompt (Gen, 16,000, Text Completion, Ensemble); FActScore (Gen, 500, Task Instructions, FActScore); KoLA-KC (Gen, 190, Task Instructions, Self-contrast); HaluEval (Dis, 35,000, Question Answering & Task Instructions, Accuracy); FACTOR (Dis, 4,030, Text Completion, Accuracy). | 2309.01219#21 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models |
2309.01219 | 22 | Table 3: Representative benchmarks that can be used for evaluating LLM hallucination, including TruthfulQA (Lin et al., 2021), FactualityPrompt (Lee et al., 2022), FActScore (Min et al., 2023), KoLA-KC (Yu et al., 2023a), HaluEval (Li et al., 2023a) and FACTOR (Muhlgay et al., 2023). Note that KoLA (Yu et al., 2023a) is designed for benchmarking the world knowledge of LLMs, where the Knowledge Creating (KC) task can be used to assess hallucination. These benchmarks all focus on the factuality aspect, but diverge in the following respects: "Evaluation" denotes how these benchmarks evaluate hallucination, either by regarding hallucination as a generation quality metric for LLM generations (Generation, referred to as Gen) or by assessing whether the LLM can discriminate between factual and non-factual statements (Discrimination, referred to as Dis); "Task Format" reflects different methods of prompting language models, e.g., knowledge-intensive question answering (QA), task instructions (TI) and context prefixes for text completion (TC). | 2309.01219#22 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models |
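To make the Generation/Discrimination distinction concrete, here is a minimal, hypothetical sketch of discrimination-style scoring (in the spirit of the Dis benchmarks in Table 3, not their official code): the model is asked to label paired factual and hallucinated statements, and accuracy is computed over its answers. The `ask_model` function is a stand-in for whatever LLM API one uses and is stubbed out here.

```python
# Hypothetical discrimination-style hallucination evaluation (Dis in Table 3).
from typing import Callable

def ask_model(prompt: str) -> str:
    # Stub: a real implementation would query an LLM and return
    # "factual" or "hallucinated".
    return "factual"

def discrimination_accuracy(examples: list[dict], ask: Callable[[str], str]) -> float:
    """Each example holds a statement and a gold label ('factual' or 'hallucinated')."""
    correct = 0
    for ex in examples:
        prompt = (
            "Decide whether the following statement is factual or hallucinated. "
            f"Answer with one word.\nStatement: {ex['statement']}\nAnswer:"
        )
        prediction = ask(prompt).strip().lower()
        correct += int(prediction == ex["label"])
    return correct / len(examples)

examples = [
    {"statement": "The mother of Afonso II was Dulce Berenguer of Barcelona.", "label": "factual"},
    {"statement": "The mother of Afonso II was Queen Urraca of Castile.", "label": "hallucinated"},
]
print(discrimination_accuracy(examples, ask_model))  # 0.5 with the stub above
```

Generation-style (Gen) evaluation would instead score free-form model outputs with a quality metric such as FActScore, which is harder to sketch without an external knowledge source.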
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
] |
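The taxonomy in the Table 3 chunk above (Gen vs. Dis evaluation; QA, TI, and TC task formats) can be captured in a few lines of code. The sketch below is illustrative only: the class and the category assignments simply restate what the surrounding chunks say about each benchmark and are not an official schema.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Benchmark:
    name: str
    evaluation: str      # "Gen" (generation) or "Dis" (discrimination)
    task_formats: tuple  # any of "QA", "TI", "TC"


BENCHMARKS = [
    Benchmark("TruthfulQA", "Gen", ("QA",)),   # also offers a Dis-style multiple-choice variant
    Benchmark("FactualityPrompt", "Gen", ("TC",)),
    Benchmark("FActScore", "Gen", ("TI",)),
    Benchmark("HaluEval", "Dis", ("QA", "TI")),
    Benchmark("FACTOR", "Dis", ("TC",)),
]

# Example query: which benchmarks test discrimination rather than generation?
print([b.name for b in BENCHMARKS if b.evaluation == "Dis"])
```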
2309.01219 | 23 | societal biases, gender bias, stereotypes, or discriminatory beliefs (Navigli et al., 2023). As shown in the third example in Table 2, the LLM portrays the teacher as a woman, which is a gender bias.
Under-informativeness. This kind of issue refers to the propensity of LLMs to evade answering certain questions or providing specific information, even when they should be capable of doing so. For instance, due to imperfections in the reward model, RLHF may lead to over-optimization of LLMs, potentially leading to a state of under-informativeness (Gao et al., 2022). An example of this is presented in Table 2, where the LLM declines to respond to the user query. | 2309.01219#23 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
] |
2309.01219 | 24 | et al. (2021) and Liu et al. (2022) evaluate models' ability to identify context conflicts introduced when BERT (Devlin et al., 2019) performs blank-filling. Most benchmarks today evaluate the fact-conflicting hallucination of LLMs (Lin et al., 2021; Lee et al., 2022; Min et al., 2023; Yu et al., 2023a; Li et al., 2023a; Muhlgay et al., 2023), which refers to their tendency to generate factual errors. This is considered a critical issue in LLMs because it is challenging for users to identify and poses real-life risks.
In the upcoming sections, we will review existing benchmark datasets and commonly used evaluation metrics in §3.1 and §3.2, respectively.
# 3 Evaluation of LLM Hallucination
# 3.1 Evaluation Benchmarks | 2309.01219#24 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
] |
2309.01219 | 25 | # 3 Evaluation of LLM Hallucination
# 3.1 Evaluation Benchmarks
Previous research has primarily concentrated on evaluating hallucination in specific natural language generation tasks, such as machine translation (Guerreiro et al., 2023b; Dale et al., 2023), dialogue generation (Dziri et al., 2021), question answering (Durmus et al., 2020) and text summarization (Kryscinski et al., 2020; Maynez et al., 2020; Zhong et al., 2021). These works mainly focus on the input-conflicting hallucination facet, which is relatively easy for human users to identify given the source text, as shown in Table 1. Recently, studying this kind of hallucination in traditional NLG tasks has seen significant advancements. However, evaluating them in the setting of LLMs becomes more challenging due to the free-form and often long-form nature of LLM generation. Regarding context-conflicting hallucination, Cui
Various benchmarks have been proposed for evaluating hallucination in LLMs. We present representative ones in Table 3 and discuss them based on their evaluation formats, task formats, and construction methods below. | 2309.01219#25 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
] |
2309.01219 | 26 | Evaluation format. Existing benchmarks mainly evaluate hallucinations based on two different abilities of LLMs: the ability to generate factual statements or to discriminate them from non-factual ones. We present an example in Table 4 to showcase the difference between the two evaluation formats. Generation benchmarks (Lin et al., 2021; Lee et al., 2022; Min et al., 2023; Yu et al., 2023a) consider hallucination as a generation characteristic, similar to fluency (Napoles et al., 2017) and coherence (Du et al., 2022), and evaluate the generated texts from LLMs. For
Evaluation | Sample
Generation | Question: What happens to you if you eat watermelon seeds? Reference: Nothing happens; You eat watermelon seeds; The watermelon seeds pass through your digestive system; You will not digest the watermelon seeds; The watermelon seeds will be excreted; There is an old wives' tale that watermelons will grow in your stomach, but this is impossible.
Discrimination | Question: What happens to you if you eat watermelon seeds?
Choices: (A) The watermelon seeds pass through your digestive system (B) You grow watermelons in your stomach (C) You get sick (D) You have bad dreams Reference: (A) The watermelon seeds pass through your digestive system. | 2309.01219#26 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
] |
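The generation/discrimination contrast illustrated by the watermelon-seeds example above comes down to how the same item is turned into a prompt. A minimal Python sketch follows; the field names and prompt wording are hypothetical and not the exact templates used by TruthfulQA or the other benchmarks.

```python
# Build a generation-style prompt (free-form answer, checked against references)
# and a discrimination-style multiple-choice prompt from the same item.
def generation_prompt(item: dict) -> str:
    return f"Q: {item['question']}\nA:"


def discrimination_prompt(item: dict) -> str:
    letters = "ABCD"
    lines = [f"Q: {item['question']}"]
    lines += [f"({letters[i]}) {choice}" for i, choice in enumerate(item["choices"])]
    lines.append("Answer with the letter of the truthful choice:")
    return "\n".join(lines)


item = {
    "question": "What happens to you if you eat watermelon seeds?",
    "choices": [
        "The watermelon seeds pass through your digestive system",
        "You grow watermelons in your stomach",
        "You get sick",
        "You have bad dreams",
    ],
}
print(generation_prompt(item))
print(discrimination_prompt(item))
```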
Table 4: Illustrative examples of two methods for evaluating hallucinations (Generation vs. Discrimination).
instance, TruthfulQA (Lin et al., 2021) evaluates the truthfulness of LLMs' responses to questions, while FActScore (Min et al., 2023) scrutinizes the factual accuracy of biographies generated by LLMs for specific individuals. In contrast, discrimination benchmarks (Li et al., 2023a; Muhlgay et al., 2023) consider LLMs' ability to discriminate truthful statements from hallucinated ones. Specifically, HaluEval (Li et al., 2023a) requires the model to determine whether a statement contains hallucinated information, while FACTOR (Muhlgay et al., 2023) investigates whether the LLM assigns a higher likelihood to the factual statement compared to non-factual ones. Note that TruthfulQA (Lin et al., 2021) also supports discrimination format by offering a multiple-choice alternative to test a model's ability to identify truthful statements. | 2309.01219#27 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
] |
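FACTOR-style discrimination (and TruthfulQA's multiple-choice variant) reduces to comparing the likelihood a model assigns to a factual statement against non-factual alternatives. Below is a minimal sketch of that comparison with an off-the-shelf causal LM; the model choice (gpt2), the prompt, and the prefix-alignment assumption are illustrative, not the benchmarks' exact protocol.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()


@torch.no_grad()
def completion_logprob(prefix: str, completion: str) -> float:
    """Sum of log-probabilities of the completion tokens given the prefix.

    Assumes the tokenization of `prefix` is a prefix of the tokenization of
    `prefix + completion` (usually true for BPE when the completion starts
    with a space).
    """
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    full_ids = tokenizer(prefix + completion, return_tensors="pt").input_ids
    logits = model(full_ids).logits                     # (1, seq_len, vocab_size)
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]                           # next-token targets
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    start = prefix_ids.shape[1] - 1                     # first completion token's position
    return token_lp[:, start:].sum().item()


prefix = "Q: What happens to you if you eat watermelon seeds?\nA:"
factual = " The watermelon seeds pass through your digestive system."
non_factual = " You grow watermelons in your stomach."

# The item counts as "passed" if the factual statement gets the higher likelihood.
print("factual statement preferred:",
      completion_logprob(prefix, factual) > completion_logprob(prefix, non_factual))
```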
2309.01219 | 28 | Task format. Existing benchmarks evaluate LLM hallucinations across various application tasks. Firstly, certain benchmarks (Lin et al., 2021; Li et al., 2023a) explore the issue of hal- lucination in the context of question-answering, evaluating the ability of LLMs to provide truthful answers to knowledge-intensive questions. Sec- ondly, FActScore (Min et al., 2023) and HaluE- val (Li et al., 2023a) employ task instructions, such as biography introduction instructions and 52K instructions from the Alpaca project (Taori et al., 2023), to prompt LLMs to generate re- sponses. The factuality of these responses is then evaluated. Thirdly, a line of work (Lee et al., 2022; Muhlgay et al., 2023) directly prompts LLMs to complete text given a prefix, and diagnoses potential hallucination during the generation of in- formative and factual statements. For instance, FACTOR (Muhlgay et al., 2023) considers con- text prefixes in Wikipedia documents, while Fac- tualityPrompt (Lee et al., 2022) designs prefixes specifically for factual or non-factual statements to elicit hallucinations. Table 5 provides samples under different task formats. | 2309.01219#28 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
] |
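The three task formats above differ mainly in how the input is wrapped before it reaches the model. A small sketch follows, with purely illustrative wrapper strings rather than the prompts actually used by the cited benchmarks.

```python
# Build a prompt for each of the three task formats: question answering (QA),
# task instruction (TI), and text completion (TC).
def build_prompt(task_format: str, payload: str) -> str:
    if task_format == "QA":
        return f"Question: {payload}\nAnswer:"
    if task_format == "TI":
        return f"Instruction: {payload}\nResponse:"
    if task_format == "TC":
        # Text completion: the payload is a context prefix to be continued as-is.
        return payload
    raise ValueError(f"unknown task format: {task_format}")


print(build_prompt("QA", "The Dutch-Belgian television series that 'House of Anubi' was based on first aired in what year?"))
print(build_prompt("TI", "Give me 3 useful websites for C programming."))
print(build_prompt("TC", '"Sorry" is a song by American singer Madonna from her tenth studio album ...'))
```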
Construction methods. Most aforementioned benchmarks involve human annotators for dataset creation or quality assurance. TruthfulQA (Lin et al., 2021) carefully designs the questions to elicit imitative falsehoods, i.e., false statements with a high likelihood on the training distribution. They then hire human annotators to further validate the agreement of golden answers. FActScore (Min et al., 2023) conducts a manual annotation pipeline to transform a long-form model generation into pieces of atomic statements. HaluEval (Li et al., 2023a) employs two construction methods. For the automatic generation track, they design prompts to query ChatGPT to sample diverse hallucinations and automatically filter high-quality ones. For the human-annotation track, they hire human annotators to annotate the existence of hallucination in the model responses and list the corresponding spans. FACTOR (Muhlgay et al., 2023) first uses external LLMs to generate non-factual completion. Then, they manually validate whether the automatically created datasets meet the predefined requirements, i.e., they should be non-factual, fluent, and similar to the factual completion. To construct the knowledge creation task, Yu et al. (2023a) build an annota- | 2309.01219#29 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
] |
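For the automatic-generation track described above, sampled hallucinations are only useful after filtering. The sketch below shows one plausible, heavily simplified filter (the thresholds and criteria are assumptions, not HaluEval's actual procedure): keep candidates that differ from the factual reference yet remain lexically close enough to be plausible distractors.

```python
# Toy quality filter for automatically generated hallucinated candidates.
def token_overlap(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(1, len(ta | tb))


def filter_candidates(reference: str, candidates: list) -> list:
    kept = []
    for cand in candidates:
        if cand.strip().lower() == reference.strip().lower():
            continue  # identical to the factual answer: not a hallucination
        if len(cand.split()) < 4:
            continue  # too short to be a fluent statement
        if token_overlap(cand, reference) < 0.2:
            continue  # too dissimilar: an easy-to-spot, low-quality distractor
        kept.append(cand)
    return kept


reference = "The watermelon seeds pass through your digestive system"
candidates = [
    "The watermelon seeds pass through your digestive system",
    "Seeds",
    "The watermelon seeds sprout and grow inside your digestive system",
]
print(filter_candidates(reference, candidates))  # keeps only the third candidate
```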
Task Format Sample Question Answering Question: The Dutch-Belgian television series that "House of Anubi" was based on first aired in what year? Answer: 2006 Task Instruction Instruction: Give me 3 useful websites for C programming. Response: 1. GeeksforGeeks: This website provides tutorials and practice problems on C programming. 2. Programiz: This website offers tutorials, practice problems, and quizzes on C programming. 3. Codeacademy: This website provides free interactive tutorials on C programming. Text Completion Context: "Sorry" is a song by American singer Madonna from her tenth studio album Confessions on a Dance Floor (2005). It was written and produced by Madonna and Stuart Price, and released as the second single from the album on February 7, 2006. It later appeared on Celebration, her 2009 greatest hits album. An uptempo dance song, "Sorry" was one of the first tracks developed for the album and had numerous remix treatments before the ultimate version of the track was finalized. Completion: One of the remixes was done by the known band the Pet Shop Boys, featuring added lyrics by the band | 2309.01219#30 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
Table 5: Illustrative examples for the task format where existing benchmarks evaluate hallucinations.
tion platform to facilitate fine-grained event annotations.
# 3.2 Evaluation Metrics | 2309.01219#30 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
] |
2309.01219 | 31 | Table 5: Illustrative examples for the task format where existing benchmarks evaluate hallucinations.
tion platform to facilitate fine-grained event annotations.
# 3.2 Evaluation Metrics
The free-form and open-ended nature of language generation makes it difficult to evaluate the hallucinations produced by LLMs. The most commonly used and reliable methods for evaluating hallucinations rely on human experts following specific principles (Lin et al., 2021; Lee et al., 2022; Min et al., 2023; Li et al., 2023a). It is worth noting that although existing benchmarks use human evaluation to ensure reliability, they also seek to support automatic methods to facilitate efficient and consistent evaluation.
Human evaluation. To ensure precise and reliable evaluation, existing benchmarks focus on designing dedicated human evaluation principles that involve manual annotation for evaluating each model-generated text. TruthfulQA (Lin et al., 2021) proposes a human-annotation guideline, which instructs annotators to assign one of thirteen qualitative labels to the model output and verify answers by consulting a reliable source. Lee et al. (2022) conduct human annotation to verify the validity of the proposed automatic evaluation metrics. FactScore (Min et al., 2023) requires annotators to assign three labels to each atomic | 2309.01219#31 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
] |
2309.01219 | 32 | fact: "Supported" or "Not-supported" for facts that are supported or unsupported by the knowledge source, and "Irrelevant" for statements that are not related to the prompt. While human evaluation offers reliability and interpretability, it may be inconsistent due to subjectivity across annotators. It is also prohibitively expensive due to the labor-intensive annotation processes required each time a new model needs to be evaluated. | 2309.01219#32 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
] |
2309.01219 | 33 | Model-based automatic evaluation. Several studies (Lin et al., 2021; Min et al., 2023; Zha et al., 2023; Mündler et al., 2023) have devised model-based methods as a proxy for human evaluation. Specifically, TruthfulQA (Lin et al., 2021) trains a GPT-3-6.7B model to classify answers (as true or false) to questions based on their collected human annotations. They observe that the fine-tuned GPT-judge model achieves a validation accuracy of 90-96% and effectively generalizes to new answer formats. AlignScore (Zha et al., 2023) establishes a unified function to evaluate the factual consistency between two texts. This alignment function is trained on a large dataset spanning seven tasks, including Natural Language Inference (NLI), Question Answering (QA), and paraphrasing. Differently, Min et al. (2023) and Mündler et al. (2023) harness the capabilities of off-the-shelf models to serve as automatic evaluators.
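A minimal sketch of the model-as-judge recipe described above, assuming only a generic `judge` callable that wraps whatever evaluator model is available; the prompt wording, the `Judge` type alias, and the toy judge in the demo are illustrative assumptions, not the prompts or models used by the cited works:

```python
from typing import Callable, Iterable

Judge = Callable[[str], str]  # maps a prompt to the evaluator model's raw reply

PROMPT = (
    "Question: {question}\n"
    "Proposed answer: {answer}\n"
    "Is the proposed answer true or false? Reply with exactly one word: true or false."
)

def judge_answer(judge: Judge, question: str, answer: str) -> bool:
    """Ask the evaluator model to label one (question, answer) pair as true/false."""
    reply = judge(PROMPT.format(question=question, answer=answer))
    return reply.strip().lower().startswith("true")

def validation_accuracy(judge: Judge, examples: Iterable[tuple]) -> float:
    """Fraction of (question, answer, human_label) triples where the judge agrees with humans."""
    examples = list(examples)
    hits = sum(judge_answer(judge, q, a) == label for q, a, label in examples)
    return hits / len(examples)

if __name__ == "__main__":
    # Toy stand-in judge so the sketch runs without API access:
    # it calls an answer "false" whenever the answer contains a negation.
    toy_judge: Judge = lambda prompt: "false" if " not " in prompt.lower() else "true"
    data = [("Is water wet?", "Water is wet.", True),
            ("Is the Moon made of cheese?", "The Moon is made of cheese.", False)]
    print(f"judge/human agreement: {validation_accuracy(toy_judge, data):.2f}")
```

In practice the toy judge would be replaced by a call to a fine-tuned or off-the-shelf evaluator model, and `validation_accuracy` reproduces the agreement-with-humans check used to validate such judges.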
2309.01219 | 34 | In particular, FactScore (Min et al., 2023) begins by employing a passage retriever, such as Generalizable T5-based Retrievers (Ni et al., 2022), to gather pertinent information. Subsequently, an evaluation model, such as LLaMA-65B (Touvron et al., 2023a), uses the retrieved knowledge to determine the truthfulness of a statement. They further adopt micro F1 scores and error rates to assess the reliability of the automatic metrics in comparison with human evaluation. Mündler et al. (2023) design dedicated prompts to query an evaluator LLM (e.g., ChatGPT (OpenAI, 2023a)) on whether the subject LLM contradicts itself under the same context, and report classification metrics, including precision, recall, and F1 score.
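The retrieve-then-verify pipeline can be sketched as follows. The lexical retriever and the word-overlap verifier in the demo are stand-ins for the dense retriever and LLM evaluator named above, and `fact_score`/`error_rate` only illustrate the shape of the computation, not the exact definitions used by FactScore:

```python
from typing import Callable, Sequence

Verifier = Callable[[str, Sequence[str]], bool]  # (claim, evidence passages) -> supported?

def retrieve(claim: str, corpus: Sequence[str], k: int = 3) -> list:
    """Toy lexical retriever: rank corpus passages by word overlap with the claim."""
    words = set(claim.lower().split())
    ranked = sorted(corpus, key=lambda p: len(words & set(p.lower().split())), reverse=True)
    return list(ranked[:k])

def fact_score(claims: Sequence[str], corpus: Sequence[str], verifier: Verifier) -> float:
    """Fraction of atomic claims that the verifier judges supported by retrieved evidence."""
    return sum(verifier(c, retrieve(c, corpus)) for c in claims) / len(claims)

def error_rate(metric_labels: Sequence[bool], human_labels: Sequence[bool]) -> float:
    """Disagreement rate between the automatic metric and human judgments."""
    return sum(m != h for m, h in zip(metric_labels, human_labels)) / len(human_labels)

if __name__ == "__main__":
    corpus = ["Marie Curie won the Nobel Prize in Physics in 1903.",
              "Paris is the capital of France."]
    # Naive verifier: a claim counts as supported if it shares >= 4 words with some passage.
    naive: Verifier = lambda claim, evidence: any(
        len(set(claim.lower().split()) & set(p.lower().split())) >= 4 for p in evidence)
    claims = ["Marie Curie won a Nobel Prize.", "Marie Curie was born in Paris."]
    print(f"FactScore-style support rate: {fact_score(claims, corpus, naive):.2f}")
```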
2309.01219 | 35 | Rule-based automatic evaluation. For discrimination benchmarks (Li et al., 2023a; Muhlgay et al., 2023), common rule-based classification metrics such as accuracy can be directly applied to evaluating the ability of LLMs to discriminate factual statements from non-factual ones. Bang et al. (2023) also compute accuracy to reflect the model's ability to identify misinformation on scientific and social claims related to COVID-19. In contrast, another line of research (Lee et al., 2022; Yu et al., 2023a) focuses on devising heuristic methods specifically designed for assessing hallucination. FactualityPrompt (Lee et al., 2022) combines a named-entity-based metric and a textual entailment-based metric to capture different aspects of factuality. To evaluate knowledge creation, Yu et al. (2023a) devise a self-contrast metric to quantify model consistency in generating factual statements. They accomplish this by comparing model-generated texts with and without including golden knowledge as part of the prompts, based on Rouge-L (F1) (Lin, 2004).
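A self-contained sketch of a ROUGE-L (F1)-based consistency check in the spirit of the self-contrast idea: generate once with gold knowledge in the prompt and once without, then score the overlap between the two generations. The exact scoring protocol of Yu et al. (2023a) may differ; this only shows the ROUGE-L computation itself:

```python
def lcs_length(a: list, b: list) -> int:
    """Length of the longest common subsequence of two token lists (rolling 1-D DP)."""
    dp = [0] * (len(b) + 1)
    for x in a:
        prev = 0
        for j, y in enumerate(b, start=1):
            current = dp[j]
            dp[j] = prev + 1 if x == y else max(dp[j], dp[j - 1])
            prev = current
    return dp[-1]

def rouge_l_f1(candidate: str, reference: str) -> float:
    """ROUGE-L F1 between two whitespace-tokenized texts."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_length(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

def self_contrast(with_knowledge: str, without_knowledge: str) -> float:
    """Consistency between generations produced with and without gold knowledge in the prompt."""
    return rouge_l_f1(without_knowledge, with_knowledge)

if __name__ == "__main__":
    g_with = "marie curie discovered polonium and radium in 1898"
    g_without = "marie curie discovered radium"
    print(f"self-contrast (ROUGE-L F1): {self_contrast(g_with, g_without):.2f}")
```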
# 4 Sources of LLM Hallucination
In this section, we aim to explore the various factors that can induce hallucinations within LLMs. We identify four primary sources that span different stages of the LLM life cycle.
2309.01219 | 36 | LLMs lack relevant knowledge or internalize false knowledge. During the pre-training phase, LLMs amass a vast amount of knowledge from an enormous volume of training data, which is then stored within their model parameters. When asked to answer questions or complete tasks, LLMs often exhibit hallucinations if they lack pertinent knowledge or have internalized false knowledge from the training corpora.
2309.01219 | 37 | Li et al. (2022c) discover that LLMs sometimes misinterpret spurious correlations, such as positionally close or highly co-occurring associations, as factual knowledge. Specifically, McKenna et al. (2023) investigate the hallucination problem within the context of the natural language inference (NLI) task and find a strong correlation between LLM hallucination and the distribution of the training data. For example, they observe that LLMs are biased toward affirming test samples where the hypotheses are attested in the training data. Besides, Dziri et al. (2022) argue that hallucination is also present in human-generated corpora (which can be reflected as outdated (Liska et al., 2022; Luu et al., 2022), biased (Chang et al., 2019; Garrido-Muñoz et al., 2021), or fabricated (Penedo et al., 2023) expression). As a result, LLMs are prone to replicate or even amplify this hallucination behavior.
2309.01219 | 38 | Wu et al. (2023b) reveal that the memorizing and reasoning performance of PLMs for ontological knowledge is less than perfect. Sun et al. (2023a) put forward a benchmark named Head-to-Tail to evaluate the factual knowledge of LLMs for entities with different levels of popularity. Experimental results suggest that LLMs still perform unsatisfactorily on torso and tail facts. Furthermore, Zheng et al. (2023c) identify two additional abilities associated with knowledge memorization that enable LLMs to provide truthful answers: knowledge recall and knowledge reasoning. Deficiencies in either of these abilities can lead to hallucinations.
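A Head-to-Tail-style breakdown can be approximated by bucketing evaluation records by entity popularity and reporting accuracy per bucket; the field names (`popularity`, `correct`), the equal-size three-way split, and the toy numbers in the demo are assumptions for illustration only:

```python
def accuracy_by_popularity(records: list, n_buckets: int = 3) -> dict:
    """Sort QA records by entity popularity and report accuracy for head/torso/tail buckets."""
    ranked = sorted(records, key=lambda r: r["popularity"], reverse=True)
    names = ["head", "torso", "tail"]
    size = max(len(ranked) // n_buckets, 1)
    report = {}
    for b in range(n_buckets):
        bucket = ranked[b * size:] if b == n_buckets - 1 else ranked[b * size:(b + 1) * size]
        if bucket:
            name = names[b] if b < len(names) else f"bucket_{b}"
            report[name] = sum(r["correct"] for r in bucket) / len(bucket)
    return report

if __name__ == "__main__":
    # Hypothetical records: popularity could be page views, link counts, etc.
    records = [
        {"entity": "Albert Einstein", "popularity": 9_000_000, "correct": True},
        {"entity": "Ada Lovelace", "popularity": 900_000, "correct": True},
        {"entity": "Emmy Noether", "popularity": 300_000, "correct": False},
        {"entity": "Mary Somerville", "popularity": 40_000, "correct": False},
        {"entity": "Sophie Germain", "popularity": 30_000, "correct": True},
        {"entity": "Hypatia", "popularity": 20_000, "correct": False},
    ]
    print(accuracy_by_popularity(records))
```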
2309.01219 | 39 | LLMs sometimes overestimate their capacities. Some studies have been conducted with the aim of understanding whether language models can assess the accuracy of their responses and recognize their knowledge boundaries. Kadavath et al. (2022) conduct experiments that demonstrate LLMs' ability to evaluate the correctness of their own responses (self-evaluation) and determine whether they know the answer to a given question. However, for very large LLMs, the distribution entropy of correct and incorrect answers could be similar, suggesting that LLMs are equally confident when generating incorrect answers as they are when generating correct ones. Yin et al. (2023) also evaluate the capacity of popular LLMs to identify unanswerable or unknowable questions. Their empirical study reveals that even the most advanced LLM, GPT-4 (OpenAI, 2023b), shows a significant performance gap when compared to humans. Ren et al. (2023) note a correlation between accuracy and confidence, but such confidence often surpasses the actual capabilities of LLMs, namely over-confidence. In general, LLMs' understanding of factual knowledge boundaries may be imprecise, and they frequently exhibit over-confidence. Such over-confidence misleads LLMs to fabricate answers with unwarranted certainty.
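One way to make the over-confidence claim concrete is to compare stated confidence with empirical accuracy, for example via expected calibration error (ECE). ECE is not necessarily the statistic used by the cited studies; the sketch below assumes a per-answer confidence is available (from token probabilities or a verbalized self-estimate):

```python
def expected_calibration_error(confidences: list, correct: list, n_bins: int = 10) -> float:
    """ECE: |accuracy - mean confidence| per equal-width confidence bin,
    weighted by the fraction of answers falling in that bin."""
    assert len(confidences) == len(correct) and confidences
    n, ece = len(confidences), 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0.0)]
        if not idx:
            continue
        accuracy = sum(correct[i] for i in idx) / len(idx)
        confidence = sum(confidences[i] for i in idx) / len(idx)
        ece += len(idx) / n * abs(accuracy - confidence)
    return ece

if __name__ == "__main__":
    # An over-confident model: stated confidence is high, accuracy is middling.
    stated = [0.95, 0.90, 0.92, 0.88, 0.97, 0.93]
    right = [True, False, True, False, False, True]
    print(f"ECE = {expected_calibration_error(stated, right):.3f}")
```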
2309.01219 | 40 | A problematic alignment process could mislead LLMs into hallucination. LLMs typically undergo an alignment process following pre-training, where they receive further training on curated instruction-following examples to align their responses with human preferences. However, when trained on instructions for which LLMs have not acquired prerequisite knowledge from the pre-training phase, this is actually a misalignment process that encourages LLMs to hallucinate (Goldberg, 2023; Schulman, 2023). Another potential issue is sycophancy, where LLMs may generate responses that favor the user's perspective rather than providing correct or truthful answers, which can result in hallucination (Perez et al., 2022; Radhakrishnan et al., 2023; Wei et al., 2023b).
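Sycophancy can be probed by pairing a neutral prompt with one that states a (wrong) user opinion and counting how often the answer flips toward that opinion. The prompt phrasing, the `ask` callable, and the substring-based agreement check below are simplifying assumptions rather than the protocol of the cited works:

```python
from typing import Callable

Ask = Callable[[str], str]  # maps a prompt to the model's answer text

def sycophancy_rate(ask: Ask, items: list) -> float:
    """Fraction of items where adding the user's (incorrect) opinion flips the model's
    answer toward that opinion. Each item needs 'question' and 'user_opinion' keys."""
    flips = 0
    for it in items:
        neutral = ask(f"{it['question']}\nAnswer briefly.")
        biased = ask(f"I am fairly sure the answer is {it['user_opinion']}.\n"
                     f"{it['question']}\nAnswer briefly.")
        agreed_before = it["user_opinion"].lower() in neutral.lower()
        agreed_after = it["user_opinion"].lower() in biased.lower()
        flips += (not agreed_before) and agreed_after
    return flips / len(items)

if __name__ == "__main__":
    # Toy model that parrots any opinion present in the prompt, i.e., maximally sycophantic.
    parrot: Ask = lambda p: (p.split("the answer is ")[1].split(".")[0]
                             if "the answer is " in p else "unsure")
    items = [{"question": "What is 7 * 8?", "user_opinion": "54"},
             {"question": "Which planet is closest to the Sun?", "user_opinion": "Venus"}]
    print(f"sycophancy rate: {sycophancy_rate(parrot, items):.2f}")
```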
2309.01219 | 41 | The generation strategy employed by LLMs has potential risks. Today's most advanced LLMs generate responses sequentially, outputting one token at a time. Zhang et al. (2023a) discover that LLMs sometimes over-commit to their early mistakes, even when they recognize they are incorrect. In other words, LLMs may prefer snowballing hallucination for self-consistency rather than recovering from errors. This phenomenon is known as hallucination snowballing. Azaria and Mitchell (2023) also contend that local optimization (token prediction) does not necessarily ensure global optimization (sequence prediction), and early local predictions may lead LLMs into situations where it becomes challenging to formulate a correct response. Lee et al. (2022) highlight that the randomness introduced by sampling-based generation strategies, such as top-p and top-k, can also be a potential source of hallucination.
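For concreteness, a minimal implementation of the two truncation-based sampling strategies named above (top-k and nucleus/top-p); the toy vocabulary and probabilities are made up for illustration:

```python
from __future__ import annotations
import random

def sample_next_token(probs: dict, top_k: int = 0, top_p: float = 1.0,
                      rng: random.Random | None = None) -> str:
    """Sample one token from `probs` after optional top-k and nucleus (top-p) truncation."""
    rng = rng or random.Random()
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    if top_k > 0:
        ranked = ranked[:top_k]                      # keep only the k most likely tokens
    if top_p < 1.0:
        kept, cumulative = [], 0.0
        for token, p in ranked:                      # keep the smallest prefix whose mass >= top_p
            kept.append((token, p))
            cumulative += p
            if cumulative >= top_p:
                break
        ranked = kept
    tokens, weights = zip(*ranked)
    return rng.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    next_token_probs = {"the": 0.50, "a": 0.30, "this": 0.15, "banana": 0.05}
    rng = random.Random(0)
    print([sample_next_token(next_token_probs, top_p=0.9, rng=rng) for _ in range(5)])
```

Even after truncation, the residual randomness means the same prompt can yield different continuations across runs, which is the risk the cited work highlights.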
LLM                              Pre-train Data Size
GLM (Zeng et al., 2022)          400B tokens
BLOOM (Scao et al., 2022)        366B tokens
GPT-3 (Brown et al., 2020)       300B tokens
LLaMA (Touvron et al., 2023a)    1.4T tokens
Llama 2 (Touvron et al., 2023b)  2T tokens

Table 6: The pre-training data size of popular LLMs.
2309.01219 | 42 | # 5 Mitigation of LLM Hallucination
In this section, we provide an extensive review of recent studies focused on mitigating LLM hallucinations. To make the structure clear, we categorize existing mitigation works based on the timing of their application within the LLM life cycle.
# 5.1 Mitigation during Pre-training
Existing work (Zhou et al., 2023a) argues that the knowledge of LLMs is mostly acquired during the pre-training phase. The presence of noisy data such as misinformation in the pre-training corpus could corrupt the parametric knowledge of LLMs, which is a significant factor contributing to hallucinations, as previously discussed in § 4. Akyürek et al. (2022) also demonstrate that it is possible to trace the factual knowledge acquired by language models back to their training data. Consequently, an intuitive approach to mitigating hallucinations could involve manually or automatically curating the pre-training corpus to minimize unverifiable or unreliable data as much as possible.
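A sketch of what automatic curation can look like in practice: a few illustrative document-level heuristics (minimum length, symbol ratio, repeated lines). The thresholds and rules here are assumptions for demonstration, not the actual filters used by any of the cited pipelines:

```python
def looks_reliable(doc: str, min_words: int = 50, max_symbol_ratio: float = 0.10,
                   max_duplicate_line_ratio: float = 0.30) -> bool:
    """Illustrative heuristic filter for pre-training documents."""
    words = doc.split()
    if len(words) < min_words:                           # drop very short documents
        return False
    symbols = sum(not (ch.isalnum() or ch.isspace()) for ch in doc)
    if symbols / max(len(doc), 1) > max_symbol_ratio:    # drop symbol-heavy boilerplate
        return False
    lines = [line.strip() for line in doc.splitlines() if line.strip()]
    if lines and 1 - len(set(lines)) / len(lines) > max_duplicate_line_ratio:
        return False                                     # drop documents dominated by repeated lines
    return True

def curate(corpus: list) -> list:
    """Keep only documents that pass all heuristics."""
    return [doc for doc in corpus if looks_reliable(doc)]
```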
2309.01219 | 43 | Before the LLM era, there existed a series of efforts dedicated to manually eliminating noisy training data to mitigate hallucinations. For instance, Gardent et al. (2017) focus on the data-to-text task and enlist human annotators to manually compose clean and accurate responses based on given knowledge bases. Such curated training data has been shown to effectively reduce hallucinations. Similarly, Wang (2019) manually refines the text in existing table-to-text datasets and observes that this process also substantially alleviates fact hallucinations. Besides, Parikh et al. (2020) instruct annotators to revise verified sentences from Wikipedia rather than directly creating new sentences when constructing table-to-text training data. This approach has also been proven to result in improved factuality of results.
With the advent of the LLM era, curating training data during pre-training has become increasingly challenging due to the vast scale of pre-training corpora (as exemplified in Table 6).
2309.01219 | 45 | instance, Llama 2 (Touvron et al., 2023b) conducts pre-training on about two trillion tokens. Therefore, compared to manual curation, a more practical approach today could be automatically selecting reliable data or filtering out noisy data. For example, the pre-training data of GPT-3 (Brown et al., 2020) is cleaned by using similarity to a range of high-quality reference corpora. The developers of Falcon (Penedo et al., 2023) carefully extract high-quality data from the web via heuristic rules and show that properly curated pre-training corpora lead to powerful LLMs (a minimal filtering sketch follows this row). Li et al. (2023f) propose phi-1.5, a 1.3-billion-parameter LLM pre-trained on filtered "textbook-like" synthetic data, which exhibits many traits of much larger LLMs. In order to mitigate hallucinations, current LLMs tend to collect pre-training data from credible text sources. The developers of Llama 2 (Touvron et al., 2023b) strategically up-sample data from highly factual sources, such as Wikipedia, when constructing the | 2309.01219#45 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
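The heuristic filtering described for chunk #45 above can be pictured with a minimal sketch. The rules, thresholds, and sample documents below are illustrative assumptions only; they are not the actual GPT-3 similarity filter or Falcon's web-cleaning pipeline.

```python
# Minimal sketch of heuristic-rule pre-training data filtering.
# All rules and thresholds here are illustrative assumptions.
import re

def keep_document(text: str,
                  min_words: int = 50,
                  max_symbol_ratio: float = 0.1,
                  max_dup_line_ratio: float = 0.3) -> bool:
    """Return True if a raw web document passes simple quality heuristics."""
    words = text.split()
    if len(words) < min_words:                         # too short to be informative
        return False
    symbols = sum(1 for ch in text if not ch.isalnum() and not ch.isspace())
    if symbols / max(len(text), 1) > max_symbol_ratio:  # markup / boilerplate heavy
        return False
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    if lines:
        dup_ratio = 1 - len(set(lines)) / len(lines)
        if dup_ratio > max_dup_line_ratio:              # repeated navigation / footer lines
            return False
    if re.search(r"lorem ipsum|click here to subscribe", text, re.I):
        return False                                     # common noise patterns
    return True

corpus = ["An encyclopedia article about language models " * 20,
          "click here to subscribe!!!"]
cleaned = [doc for doc in corpus if keep_document(doc)]
print(f"kept {len(cleaned)}/{len(corpus)} documents")
```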
2309.01219 | 46 | of Llama 2 (Touvron et al., 2023b) strategically up-sample data from highly factual sources, such as Wikipedia, when constructing the pre-training corpus. Lee et al. (2022) propose to prepend the topic prefix to sentences in the factual documents to make each sentence serve as a standalone fact during pre-training. Concretely, they treat the document name as the topic prefix and observe that this method improves LMs' performance on TruthfulQA (see the sketch after this row). | 2309.01219#46 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
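Below is a minimal sketch of the topic-prefix idea attributed to Lee et al. (2022) in the row above: prepend the document name to every sentence so each sentence can stand alone as a fact. The sentence splitter and formatting are assumptions for illustration, not the paper's exact preprocessing.

```python
# Minimal sketch of topic-prefix prepending for factual pre-training documents.
import re

def add_topic_prefix(doc_name, document):
    # Naive sentence split on end-of-sentence punctuation (illustrative only).
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    return [f"{doc_name}: {s}" for s in sentences if s]

doc = ("Marie Curie was a physicist and chemist. "
       "She conducted pioneering research on radioactivity.")
for line in add_topic_prefix("Marie Curie", doc):
    print(line)
# Marie Curie: Marie Curie was a physicist and chemist.
# Marie Curie: She conducted pioneering research on radioactivity.
```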
Summary & Discussion. The mitigation of hallucinations during pre-training is primarily centered on curating the pre-training corpora. Given the vast scale of existing pre-training corpora, current studies predominantly employ simple heuristic rules for data selection and filtering. A potential avenue for exploration could be devising more effective selection or filtering strategies.
# 5.2 Mitigation during SFT
As a common practice, current LLMs collectively undergo supervised fine-tuning (SFT) to elicit the knowledge acquired during pre-training and learn how to interact with users (Wang et al., 2023c; Zhang et al.,
Figure 3: The SFT data usually contains samples that exceed LLMs' parametric knowledge, which may result in hallucinations. | 2309.01219#47 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
2309.01219 | 48 | Figure 3: The SFT data usually contains samples that exceed LLMs' parametric knowledge, which may result in hallucinations.
2023b). SFT generally involves first annotating or collecting large-scale, multi-task instruction-following data (Chung et al., 2022; Taori et al., 2023), followed by fine-tuning pre-trained foundational LLMs on this data using maximum likelihood estimation (MLE) (Wei et al., 2021); a minimal loss sketch follows this row. By employing well-designed SFT strategies, many recent studies claim to have built LLMs that achieve performance on par with ChatGPT (Wang et al., 2023b). | 2309.01219#48 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
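The MLE objective described in the row above can be sketched in PyTorch as token-level cross-entropy on response tokens, with prompt tokens masked out. The toy shapes, the random logits standing in for a model forward pass, and the masking convention are illustrative assumptions, not a specific framework's implementation.

```python
# Minimal sketch of the SFT (MLE) loss: cross-entropy over response tokens only.
import torch
import torch.nn.functional as F

def sft_loss(logits, input_ids, prompt_lens):
    # logits: (batch, seq_len, vocab); input_ids: (batch, seq_len);
    # prompt_lens: (batch,) number of prompt tokens per sample.
    shift_logits = logits[:, :-1, :]          # predict token t+1 from positions <= t
    shift_labels = input_ids[:, 1:].clone()
    positions = torch.arange(shift_labels.size(1)).unsqueeze(0)
    # Mask prompt positions so only response tokens contribute to the loss.
    shift_labels[positions < (prompt_lens.unsqueeze(1) - 1)] = -100
    return F.cross_entropy(shift_logits.reshape(-1, shift_logits.size(-1)),
                           shift_labels.reshape(-1),
                           ignore_index=-100)

# Toy usage with random logits standing in for a causal LM forward pass.
batch, seq_len, vocab = 2, 10, 32
logits = torch.randn(batch, seq_len, vocab)
input_ids = torch.randint(0, vocab, (batch, seq_len))
prompt_lens = torch.tensor([4, 6])
print(sft_loss(logits, input_ids, prompt_lens))
```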
2309.01219 | 49 | Similar to pre-training, one potential approach to reduce hallucination during the SFT stage could be curating the training data. Given the relatively small volume of SFT data (refer to Table 7), both manual and automatic curation are viable options here. Zhou et al. (2023a) have meticulously constructed an instruction-tuning dataset comprising 1,000 samples annotated by human experts. Some other studies (Chen et al., 2023b; Cao et al., 2023; Lee et al., 2023) have employed automatic selection of high-quality instruction-tuning data, by leveraging LLMs as evaluators or designing specific rules (see the selection sketch after this row). Experimental results on hallucination-related benchmarks, such as TruthfulQA (Lin et al., 2021), suggest that LLMs fine-tuned on such curated instruction data demonstrate higher levels of truthfulness and factuality compared to LLMs fine-tuned on uncurated data. Furthermore, Mohamed et al. (2023) propose the integration of domain-specific knowledge sets into the SFT data, which aims to reduce hallucinations that arise from a lack of relevant knowledge. | 2309.01219#49 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
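A minimal sketch of automatic instruction-data selection as described in the row above: score each (instruction, response) pair and keep only high-scoring samples. The rule-based scorer and threshold are illustrative assumptions; the cited works instead use LLM evaluators or their own curation rules.

```python
# Minimal sketch of rule-based selection of high-quality instruction-tuning data.
def quality_score(instruction, response):
    score = 0.0
    if len(response.split()) >= 20:                       # reward substantive answers
        score += 1.0
    if response.lower().startswith(("as an ai", "i'm sorry")):
        score -= 0.5                                      # penalize boilerplate refusals
    if instruction.strip().endswith("?") and "?" not in response:
        score += 0.5                                      # answers, not counter-questions
    return score

def select_sft_data(samples, threshold=1.0):
    return [s for s in samples
            if quality_score(s["instruction"], s["response"]) >= threshold]

samples = [
    {"instruction": "What causes tides?",
     "response": "Tides are caused mainly by the gravitational pull of the Moon "
                 "and the Sun on Earth's oceans, combined with Earth's rotation."},
    {"instruction": "What causes tides?", "response": "As an AI, I cannot say."},
]
print(len(select_sft_data(samples)), "of", len(samples), "samples kept")
```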
2309.01219 | 51 | that this method simply mimics behavior without learning a strategy to achieve the final goal. The SFT process of LLMs can be viewed as a special case of behavior cloning, where LLMs learn the format and style of interaction by mimicking humans. As for LLMs, despite having encoded a substantial amount of knowledge into their parameters, there remains knowledge that surpasses their capacity (Yin et al., 2023; Ren et al., 2023). By cloning human behaviors during SFT, LLMs learn to respond to all questions with a predominantly positive tone, without assessing whether these questions exceed their knowledge boundaries (see Figure 3). As a result, during inference, if prompted to answer questions related to unlearned knowledge, they are likely to confidently produce hallucinations. One way to remedy this problem can be honesty-oriented SFT, which means introducing some honest samples into the SFT data. Honest samples refer to responses that admit incompetence, such as "Sorry, I don't know". The Moss project (Sun et al., 2023b) open-sourced their SFT data, which includes such honest samples (a data-mixing sketch follows this row). We observed that models tuned with them could learn to refuse to answer specific questions, therefore helping reduce hallucinations. | 2309.01219#51 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
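A minimal sketch of the honesty-oriented SFT idea in the row above: mix a small fraction of refusal ("Sorry, I don't know") samples into the instruction-tuning set. The refusal template, example questions, and mixing ratio are assumptions for illustration, not the Moss project's actual data.

```python
# Minimal sketch of honesty-oriented SFT data augmentation with refusal samples.
import random

REFUSAL = "Sorry, I don't know. I don't have reliable information about that."

def build_honest_samples(unanswerable_questions):
    return [{"instruction": q, "response": REFUSAL} for q in unanswerable_questions]

def mix_sft_data(base_samples, honest_samples, honest_ratio=0.05, seed=0):
    rng = random.Random(seed)
    n_honest = max(1, int(honest_ratio * len(base_samples)))
    mixed = base_samples + rng.sample(honest_samples,
                                      min(n_honest, len(honest_samples)))
    rng.shuffle(mixed)
    return mixed

base = [{"instruction": f"Question {i}", "response": f"Answer {i}"} for i in range(100)]
honest = build_honest_samples(["What will the stock price of X be next year?",
                               "What is the private phone number of person Y?"])
print(len(mix_sft_data(base, honest)))  # 100 base samples + a few refusals
```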
2309.01219 | 52 | Summary & Discussion. Curating the training data is one approach for mitigating hallucinations during the SFT phase. Thanks to the manageable volume of SFT data, it can be manually curated by human experts. Recently, we performed a preliminary human inspection and observed that some widely used synthetic SFT data, such as Alpaca (Taori et al., 2023), contains a considerable number of hallucinated answers due to the lack of human inspection. This calls for careful attention when researchers try to build SFT datasets based on self-instruct (Wang et al., 2023c).
Previous work also pointed out that the SFT process may inadvertently introduce hallucinations, by forcing LLMs to answer questions that surpass their knowledge boundaries. Some researchers have suggested honesty-oriented SFT as a solution. However, we argue this method has two main problems. Firstly, it exhibits limited generalization capabilities towards out-of-distribution (OOD) cases. Secondly, the annotated honest samples just reflect the incompetence and uncertainty of annotators rather than those of LLMs, as annotators are unaware of LLMs' real knowledge boundaries.
Situation and reward value: Unhedged Correct, +1; Hedged Correct, +0.5; Uninformative, 0; Hedged Wrong, -2; Unhedged Wrong, -4. | 2309.01219#52 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
2309.01219 | 53 | Situation and reward value: Unhedged Correct, +1; Hedged Correct, +0.5; Uninformative, 0; Hedged Wrong, -2; Unhedged Wrong, -4.
Table 8: An example of reward design for mitigating LLM hallucinations through RL (Schulman, 2023).
Such challenges make solving this issue during SFT suboptimal.
# 5.3 Mitigation during RLHF
Nowadays, many researchers attempt to further improve supervised fine-tuned LLMs via reinforcement learning from human feedback (RLHF) (Fernandes et al., 2023). This process consists of two steps: 1) train a reward model (RM) as a proxy for human preference, which aims to assign an appropriate reward value to each LLM response; 2) optimize the SFT model with the reward model's feedback, using RL algorithms such as PPO (Schulman et al., 2017). A reward-shaping sketch based on Table 8 follows this row. | 2309.01219#53 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
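The reward design of Table 8 (rows #52 and #53 above) can be sketched as a simple reward function layered on top of RLHF. The hedging and refusal detectors below are placeholder assumptions; a real system would rely on a trained reward model or verifier for correctness and tone.

```python
# Minimal sketch of the hallucination-aware reward design in Table 8 (Schulman, 2023).
# The keyword-based detectors are illustrative placeholders only.
HEDGE_MARKERS = ("i think", "i'm not sure", "it might be", "possibly")
REFUSALS = ("i don't know", "i cannot answer")

def table8_reward(response, is_correct):
    # is_correct: True/False for verifiable answers, or None when the reply is a refusal
    # or correctness cannot be judged.
    text = response.lower()
    if any(r in text for r in REFUSALS) or is_correct is None:
        return 0.0                       # uninformative
    hedged = any(h in text for h in HEDGE_MARKERS)
    if is_correct:
        return 0.5 if hedged else 1.0    # hedged correct / unhedged correct
    return -2.0 if hedged else -4.0      # hedged wrong / unhedged wrong

print(table8_reward("Paris is the capital of France.", True))   # 1.0
print(table8_reward("I think it might be Lyon.", False))         # -2.0
print(table8_reward("I don't know.", None))                      # 0.0
```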
2309.01219 | 54 | Leveraging human feedback not only closes the gap between machine-generated content and human preference but also helps LLMs align with desired criteria or goals. One commonly used criterion today is "3H", which denotes helpful, honest, and harmless (Ouyang et al., 2022; Bai et al., 2022; Zheng et al., 2023b). The honest aspect here refers to the minimization of hallucinations in LLM responses. Current advanced LLMs, such as InstructGPT (Ouyang et al., 2022), ChatGPT (OpenAI, 2023a), GPT-4 (OpenAI, 2023b), and Llama2-Chat (Touvron et al., 2023b), have collectively considered this aspect during RLHF. For example, GPT-4 uses synthetic hallucination data to train the reward model and perform RL, which increases accuracy on TruthfulQA (Lin et al., 2021) from about 30% to 60%. Moreover, Lightman et al. (2023) use process supervision to detect and mitigate hallucinations for reasoning tasks, which provides feedback for each intermediate reasoning step. | 2309.01219#54 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
2309.01219 | 56 | User Input: Introduce the film "The Only Thing" (Four Flaming Days) to me. System Response: "The Only Thing" (Four Flaming Days) is a captivating drama film that delves into the complexities of human emotions, relationships, and the power of resilience. Set against the backdrop of a small town, the story unfolds over the course of four intense, life-changing days that leave an indelible mark on the lives of the characters involved. The film follows the journey of a young woman, Emily, who returns to her hometown after several years away, only to find it in the grip of an unprecedented crisis. As the town faces an impending disaster, Emily is forced to confront her own demons, rekindle old friendships, and navigate the intricate web of relationships that tie her to the town's inhabitants. As the four days progress, Emily and the people around her are pushed to their limits, as they grapple with love, loss, betrayal, and the harsh realities of life. The film delves deep into the human psyche, exploring themes of forgiveness, redemption, and the power of human connection in the face of adversity. User Input: What genre of movie is | 2309.01219#56 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
2309.01219 | 58 | Table 9: A real example of the over-conservative phenomenon of ChatGPT (July 2023 Version). As demonstrated in this example, ChatGPT refuses to provide a fairly clear answer it already knows, specifically, the genre of "The Only Thing" being a drama film (highlighted in red within the first response).
and LLM knowledge boundaries. In light of this, Schulman (2023) proposes to solve this problem during RLHF. They design a special reward function just for mitigating hallucinations, as shown in Table 8. "Unhedged/Hedged Correct/Wrong" here means the LLM provides correct or wrong answers with a positive or hesitant tone. "Uninformative" denotes safe answers like "I don't know". The core idea is to encourage LLMs to challenge the premise, express uncertainty, and admit incapability by learning from specially designed rewards. This method, which we refer to as honesty-oriented RL, offers several advantages over honesty-oriented SFT. The primary benefit is that it allows LLMs to freely explore their knowledge boundaries, thereby enhancing their generalization capabilities to OOD cases. Additionally, it reduces the need for extensive human annotation and eliminates the requirement for annotators to guess the knowledge boundaries of LLMs. | 2309.01219#58 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
2309.01219 | 59 | Summary & Discussion. Reinforcement learning can guide LLMs in exploring their knowledge boundaries, enabling them to decline to answer questions beyond their capacity rather than fabricating untruthful responses. However, we note this approach also poses unique challenges. For instance, RL-tuned LLMs may exhibit over-conservatism due to an imbalanced trade-off between helpfulness and honesty (Ouyang et al., 2022). An example of this is illustrated in Table 9. As observed in this case, ChatGPT tends to be overly hedged and refrains from providing a clear answer that it already knows, as evidenced in another dialogue turn. This could be attributed to the unreasonable design of the reward function
or the poor quality of the training data for the reward model. We hope future work can take such problems into consideration.
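To make the reward scheme described above concrete, the following is a minimal sketch in Python. The outcome categories follow the Unhedged/Hedged Correct/Wrong and Uninformative labels mentioned above, but the numeric reward values and the helper function are illustrative assumptions, not the actual design of Schulman (2023), where the reward model is learned rather than tabulated.

```python
# Minimal sketch of a hallucination-aware reward, assuming illustrative
# categories and values; a real RLHF reward model is learned, not a lookup table.

from enum import Enum

class AnswerOutcome(Enum):
    UNHEDGED_CORRECT = "unhedged_correct"   # confident and right
    HEDGED_CORRECT = "hedged_correct"       # right, but expressed with uncertainty
    UNINFORMATIVE = "uninformative"         # e.g. "I don't know"
    HEDGED_WRONG = "hedged_wrong"           # wrong, but expressed with uncertainty
    UNHEDGED_WRONG = "unhedged_wrong"       # confidently wrong (hallucination)

# Hypothetical reward values: confident truth is best, confident falsehood is
# penalized most, and declining to answer sits in between.
REWARD = {
    AnswerOutcome.UNHEDGED_CORRECT: 1.0,
    AnswerOutcome.HEDGED_CORRECT: 0.5,
    AnswerOutcome.UNINFORMATIVE: 0.0,
    AnswerOutcome.HEDGED_WRONG: -2.0,
    AnswerOutcome.UNHEDGED_WRONG: -4.0,
}

def honesty_reward(is_correct: bool, is_hedged: bool, is_informative: bool) -> float:
    """Map a judged model answer to a scalar reward for RL fine-tuning."""
    if not is_informative:
        return REWARD[AnswerOutcome.UNINFORMATIVE]
    if is_correct:
        key = AnswerOutcome.HEDGED_CORRECT if is_hedged else AnswerOutcome.UNHEDGED_CORRECT
    else:
        key = AnswerOutcome.HEDGED_WRONG if is_hedged else AnswerOutcome.UNHEDGED_WRONG
    return REWARD[key]

if __name__ == "__main__":
    # A confidently wrong answer is punished harder than an honest "I don't know".
    print(honesty_reward(is_correct=False, is_hedged=False, is_informative=True))   # -4.0
    print(honesty_reward(is_correct=True, is_hedged=True, is_informative=True))     # 0.5
    print(honesty_reward(is_correct=False, is_hedged=False, is_informative=False))  # 0.0
```

Tuning the gap between the uninformative reward and the wrong-answer penalties is where the helpfulness/honesty trade-off discussed above is decided: too small a gap encourages confident guessing, too large a gap encourages the over-conservatism seen in Table 9.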
# 5.4 Mitigation during Inference
Compared with the aforementioned training-time mitigation approaches, mitigating hallucinations at inference time could be more cost-effective and controllable. Therefore, most existing studies focus on this direction, which we will introduce in detail in the following sections.
# 5.4.1 Designing Decoding Strategies
Decoding strategies, such as greedy decoding and beam search decoding, determine how we choose output tokens from the probability distribution generated by models (Zarrieß et al., 2021). | 2309.01219#59 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
2309.01219 | 60 | Decoding strategies, such as greedy decoding and beam search decoding, determine how we choose output tokens from the probability distribution generated by models (Zarrieß et al., 2021).
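As a concrete illustration of how a decoding strategy turns a next-token distribution into a choice, here is a minimal sketch of greedy and nucleus (top-p) selection over a toy distribution; it is a simplification for exposition, not the decoder of any particular LLM, and the token probabilities are made up.

```python
# Minimal sketch: greedy vs. nucleus (top-p) selection over a toy
# next-token distribution. Real decoders operate on logits from the model.

import random

def greedy_pick(probs: dict) -> str:
    """Always take the most probable token (deterministic, low diversity)."""
    return max(probs, key=probs.get)

def top_p_pick(probs: dict, p: float = 0.9, rng: random.Random = None) -> str:
    """Sample from the smallest set of tokens whose cumulative probability >= p."""
    rng = rng or random.Random(0)
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, cum = [], 0.0
    for tok, pr in ranked:
        nucleus.append((tok, pr))
        cum += pr
        if cum >= p:
            break
    total = sum(pr for _, pr in nucleus)
    return rng.choices([t for t, _ in nucleus],
                       weights=[pr / total for _, pr in nucleus])[0]

if __name__ == "__main__":
    next_token_probs = {"Paris": 0.55, "London": 0.25, "Berlin": 0.15, "banana": 0.05}
    print(greedy_pick(next_token_probs))        # Paris
    print(top_p_pick(next_token_probs, p=0.9))  # Paris, London, or Berlin
```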
Lee et al. (2022) carry out a factuality assessment of content generated by LLMs using different decoding strategies. They find that nucleus sampling (a.k.a. top-p sampling) (Holtzman et al., 2019) falls short of greedy decoding in terms of factuality. They argue that this underperformance could be attributed to the randomness introduced by top-p sampling to boost diversity, which may inadvertently lead to hallucinations, since LLMs tend to fabricate information to generate diverse responses. In view of this, they introduce a decoding algorithm termed factual-nucleus sampling, which aims to strike a more effective balance between diversity and factuality by leveraging the strengths of both top-p and greedy decoding.
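A rough sketch of the factual-nucleus idea is shown below: the nucleus threshold p decays as a sentence progresses (later tokens are sampled closer to greedily) and resets at sentence boundaries. The decay rule, constants, and sentence-boundary test are illustrative assumptions based on the description above, not the exact algorithm of Lee et al. (2022).

```python
# Illustrative sketch of a decaying nucleus threshold for factual-nucleus-style
# sampling: sample diversely at the start of each sentence, then move toward
# greedy decoding as the sentence continues. Constants are assumptions.

def factual_nucleus_p(step_in_sentence: int, p_start: float = 0.9,
                      decay: float = 0.9, p_floor: float = 0.3) -> float:
    """Return the top-p threshold for the given token position within a sentence."""
    return max(p_floor, p_start * (decay ** step_in_sentence))

def thresholds_for(tokens: list) -> list:
    """Walk a token sequence, resetting the decay whenever a sentence ends."""
    out, step = [], 0
    for tok in tokens:
        out.append(round(factual_nucleus_p(step), 3))
        step = 0 if tok.endswith(".") else step + 1  # naive sentence-boundary reset
    return out

if __name__ == "__main__":
    toks = ["The", "Eiffel", "Tower", "is", "in", "Paris.",
            "It", "was", "built", "in", "1889."]
    print(list(zip(toks, thresholds_for(toks))))
```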
Dhuliawala et al. (2023) develop a decoding framework known as the Chain-of-Verification (COVE). This framework is based on the observation that independent verification questions typically | 2309.01219#60 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
2309.01219 | 61 | Method (Timing of Using; Knowledge Source):
- WebGPT (Nakano et al., 2021): Generation-Time; Search API
- Adaptive-Retrieval (Mallen et al., 2023): Generation-Time; Wikipedia
- ReACT (Yao et al., 2022): Generation-Time; Wikipedia
- RETRO (Borgeaud et al., 2022): Generation-Time; Unstructured Corpus
- Chain-of-Knowledge (Li et al., 2023d): Generation-Time; Structured Knowledge Base
- RARR (Gao et al., 2023a): Post-Processing; Search API
- Verify-then-Edit (Zhao et al., 2023b): Post-Processing; Wikipedia, Search API, etc.
- LLM-Augmenter (Peng et al., 2023a): Post-Processing; Web documents, Databases
- REFEED (Yu et al., 2023b): Post-Processing; Wikipedia
- CRITIC (Gou et al., 2023): Post-Processing; Search API, Code Executor, Calculator, etc.
- FacTool (Chern et al., 2023): Post-Processing; Search API, Code Executor, Calculator, etc.
Application Task labels appearing in the table: QA; QA & FV; LM & QA; QA & Reasoning & Generation; QA & FV & Decision; QA, Dialogue; QA & Program & Toxicity. | 2309.01219#61 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
2309.01219 | 62 | Table 10: A summary of some recent studies on resorting to external knowledge to mitigate hallucinations. We use abbreviations for some application task names, including QA (Question Answering), FV (Fact Verification), and LM (Language Modeling).
yield more accurate facts than those presented in long-form answers. The COVE framework initially plans verification questions, and then answers these questions to ultimately produce an enhanced, revised response. Experimental results on list-based questions, closed-book QA, and long-form text generation demonstrate that COVE can effectively mitigate hallucination.
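The pipeline below sketches the plan-verify-revise loop in plain Python around a hypothetical generate(prompt) callable that stands in for an LLM API. The prompt wording and the stub model are illustrative assumptions, not the prompts used in the COVE paper.

```python
# Minimal sketch of a Chain-of-Verification-style loop around a hypothetical
# `generate` callable standing in for an LLM API. Prompts are illustrative.

from typing import Callable, List

def chain_of_verification(question: str, generate: Callable[[str], str]) -> str:
    # 1) Draft a baseline answer.
    draft = generate(f"Answer the question.\nQuestion: {question}\nAnswer:")

    # 2) Plan independent verification questions about the draft.
    plan = generate(
        "List short fact-checking questions (one per line) that would verify "
        f"this answer.\nQuestion: {question}\nDraft answer: {draft}\nChecks:"
    )
    checks: List[str] = [line.strip() for line in plan.splitlines() if line.strip()]

    # 3) Answer each verification question on its own, without showing the draft.
    findings = [f"{c} -> {generate(f'Answer briefly: {c}')}" for c in checks]

    # 4) Revise the draft in light of the verification answers.
    return generate(
        "Revise the draft so it is consistent with the verification findings.\n"
        f"Question: {question}\nDraft: {draft}\nFindings:\n"
        + "\n".join(findings) + "\nRevised answer:"
    )

if __name__ == "__main__":
    # Trivial stub model so the sketch runs end to end without an API key.
    def stub_llm(prompt: str) -> str:
        return "stub response to: " + prompt.splitlines()[0][:40]

    print(chain_of_verification("Name three politicians born in New York.", stub_llm))
```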
Another work, Li et al. (2023b), introduces a novel Inference-Time Intervention (ITI) method to improve the truthfulness of LLMs. This method is based on the assumption that LLMs possess latent, interpretable sub-structures associated with factuality. The ITI method comprises two steps: 1) fitting a binary classifier on top of each attention head of the LLM to identify a set of heads that exhibit superior linear probing accuracy for answering factual questions, and 2) shifting model activations along these factuality-related directions during inference. The ITI method leads to a substantial performance improvement on the TruthfulQA benchmark (Lin et al., 2021). | 2309.01219#62 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
2309.01219 | 63 | Distinct from the aforementioned studies, Shi et al. (2023b) instead concentrates on the retrieval- augmentation setting. Prior research has shown that LLMs sometimes fail to adequately attend to retrieved knowledge when addressing downstream tasks, particularly when the retrieved knowl- edge conflicts with the parametric knowledge of LLMs (Zhou et al., 2023b; Xie et al., 2023). To address this issue, Shi et al. (2023b) propose a straightforward context-aware decoding (CAD) strategy. The core idea of CAD is to perform a contrastive ensemble of pθ(yt | x, c, y<t) and pθ(yt | x, y<t), where θ represents the LM, x is the input query, c is the context, y is the response, and t is the time step. pθ(yt | x, c, y<t) means the gen- eration probability distribution of t-th token when | 2309.01219#63 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
2309.01219 | 64 | given the context, while pθ(yt | x, y<t) denotes the distribution only considering the query. The CAD method aims to compel LLMs to pay more attention to contextual information instead of over-relying on their own parametric knowledge to make decisions. Experimental results show that CAD effectively elicits the ability of LLMs to exploit retrieved knowledge and thus reduces factual hallucinations on downstream tasks. Another work, DoLA (Chuang et al., 2023), also employs the idea of contrastive decoding to reduce hallucination. However, they contrast the generation probabilities from different layers of LLMs, as they find that linguistic and factual information is encoded in distinct sets of layers.
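In score space, the contrastive ensemble described above can be written as a simple interpolation between the context-conditioned and context-free distributions. The sketch below assumes access to both token distributions and uses an illustrative weight alpha; it follows the general form of the formula rather than the exact CAD implementation.

```python
# Sketch of context-aware decoding (CAD)-style contrastive scoring:
# upweight tokens whose probability rises when the retrieved context c is
# present, i.e. score = (1 + alpha) * log p(y|x,c) - alpha * log p(y|x)
#                     = log p(y|x,c) + alpha * (log p(y|x,c) - log p(y|x)).

import math

def cad_scores(p_with_context: dict, p_without_context: dict, alpha: float = 0.5) -> dict:
    """Return contrastive scores per candidate token (higher is better)."""
    scores = {}
    for tok, p_ctx in p_with_context.items():
        p_plain = max(p_without_context.get(tok, 1e-9), 1e-9)
        scores[tok] = (1 + alpha) * math.log(p_ctx) - alpha * math.log(p_plain)
    return scores

if __name__ == "__main__":
    # Toy example: the retrieved context supports one answer, parametric memory
    # prefers another; the contrastive score favors the context-supported token.
    with_ctx = {"Naypyidaw": 0.6, "Yangon": 0.4}     # p(y | x, c)
    without_ctx = {"Naypyidaw": 0.1, "Yangon": 0.9}  # p(y | x)
    scores = cad_scores(with_ctx, without_ctx, alpha=0.5)
    print(max(scores, key=scores.get), scores)
```

Setting alpha to zero recovers ordinary context-conditioned decoding, while larger alpha pushes the model harder toward tokens that the retrieved context, rather than parametric memory, supports.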
Summary & Discussion. Designing decoding strategies to mitigate hallucinations in LLMs during inference is typically done in a plug-and-play manner. Therefore, this method is easy to deploy, making it promising for practical applications. However, most existing works in this line require access to the token-level output probabilities, while a substantial number of current LLMs can only return generated content through limited APIs (e.g., ChatGPT). Consequently, we encourage future research in this direction to explore within a stricter black-box setting.
# 5.4.2 Resorting to External Knowledge | 2309.01219#64 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
2309.01219 | 65 | # 5.4.2 Resorting to External Knowledge
Using external knowledge as supplementary evidence to assist LLMs in providing truthful responses has recently emerged as a burgeoning solution (Ren et al., 2023; Mialon et al., 2023). This approach typically consists of two steps. The first step entails accurately obtaining knowledge related to the user instructions. Once useful knowledge has been acquired, the second step involves
leveraging such knowledge to guide the generation of the responses. We provide a comprehensive review of the latest progress in this direction, focusing on the specific strategies employed in these two steps, respectively. We also present a summary of recent studies in Table 4.
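The two-step pattern can be summarized in a few lines of Python. The retriever and LLM below are hypothetical stand-ins, and the prompt template is an assumption for illustration rather than a method from any of the cited papers.

```python
# Minimal sketch of the two-step pattern: (1) acquire knowledge relevant to the
# query, (2) use it to guide generation. The retriever and LLM are stubbed out.

from typing import Callable, List

def retrieve_then_generate(query: str,
                           retrieve: Callable[[str, int], List[str]],
                           generate: Callable[[str], str],
                           k: int = 3) -> str:
    evidence = retrieve(query, k)                      # step 1: knowledge acquisition
    context = "\n".join(f"- {e}" for e in evidence)
    prompt = (                                         # step 2: knowledge utilization
        "Answer using only the evidence below; say 'not enough evidence' otherwise.\n"
        f"Evidence:\n{context}\nQuestion: {query}\nAnswer:"
    )
    return generate(prompt)

if __name__ == "__main__":
    kb = [
        "Marie Curie won the Nobel Prize in Physics in 1903.",
        "Marie Curie won the Nobel Prize in Chemistry in 1911.",
    ]
    stub_retrieve = lambda q, k: kb[:k]                # pretend relevance ranking
    stub_generate = lambda prompt: "stub answer grounded in:\n" + prompt
    print(retrieve_then_generate("Which Nobel Prizes did Marie Curie win?",
                                 stub_retrieve, stub_generate))
```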
Knowledge acquisition. LLMs have internalized vast amounts of knowledge into their parameters through extensive pre-training and fine-tuning, which can be referred to as parametric knowledge (Roberts et al., 2020). However, incorrect or outdated parametric knowledge can easily lead to hallucinations (Xie et al., 2023). To remedy this, researchers have proposed acquiring reliable, up-to-date knowledge from credible sources as a form of hot patching for LLMs (Lewis et al., 2020b; Li et al., 2022a). We summarize the two primary sources of such knowledge as follows. | 2309.01219#65 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
2309.01219 | 66 | The majority of existing works retrieve information from external knowledge bases, such as large-scale unstructured corpora (Cai et al., 2021; Borgeaud et al., 2022), structured databases (Liu, 2022; Li et al., 2023d), specific websites like Wikipedia (Yao et al., 2022; Peng et al., 2023a; Li et al., 2023c; Yu et al., 2023b), or even the entire Internet (Lazaridou et al., 2022; Yao et al., 2022; Gao et al., 2023a; Liu et al., 2023c). The evidence retrieval process typically employs various sparse (e.g., BM25 (Robertson et al., 2009)) or dense (e.g., PLM-based methods (Zhao et al., 2022)) retrievers; a minimal sparse-retrieval sketch is given below. Search engines, such as Google Search, can also be viewed as a special kind of information retriever (Nakano et al., 2021; Lazaridou et al., 2022; Yao et al., 2022; Gao et al., 2023a). Besides, Luo et al. (2023c) propose the parameter knowledge guiding framework, which retrieves | 2309.01219#66 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
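Picking up the sparse-versus-dense retriever discussion above, here is the promised minimal sparse-retrieval sketch: a compact BM25-style scorer over a toy corpus. The k1 and b values are common defaults, the corpus and query are illustrative assumptions, and production retrievers add tokenization, stemming, stop-word handling, and inverted-index structures.

```python
# A compact BM25-style sparse retriever over a toy corpus, for illustration
# only; it omits the preprocessing and indexing of a production system.

import math
from collections import Counter

def bm25_rank(query: str, docs: list, k1: float = 1.5, b: float = 0.75) -> list:
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(d) for d in tokenized) / len(tokenized)
    n = len(docs)

    def idf(term: str) -> float:
        df = sum(1 for d in tokenized if term in d)
        return math.log((n - df + 0.5) / (df + 0.5) + 1.0)

    scores = []
    for doc_id, doc in enumerate(tokenized):
        tf = Counter(doc)
        s = 0.0
        for term in query.lower().split():
            f = tf[term]
            denom = f + k1 * (1 - b + b * len(doc) / avgdl)
            s += idf(term) * f * (k1 + 1) / denom if f else 0.0
        scores.append((s, doc_id))
    return [docs[i] for _, i in sorted(scores, reverse=True)]

if __name__ == "__main__":
    corpus = [
        "The Eiffel Tower is located in Paris and was completed in 1889.",
        "Mount Everest is the highest mountain above sea level.",
        "Paris is the capital of France.",
    ]
    print(bm25_rank("where is the eiffel tower", corpus)[0])
```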
2309.01219 | 68 | (2) External tools. In addition to solely retrieving information from knowledge bases, many other tools can provide valuable evidence to enhance the factuality of content generated by LLMs (Mialon et al., 2023; Qin et al., 2023; Qiao et al., 2023). For instance, FacTool (Chern et al., 2023) employs different tools to help detect hallucinations in LLMs for specific downstream tasks, such as a search engine API for knowledge-based QA, a code executor for code generation, and the Google Scholar API for scientific literature review. CRITIC (Gou et al., 2023) also enables LLMs to interact with multiple tools and revise their responses autonomously, which has been proven to effectively improve truthfulness.
[Figure 4 panels: (a) Generation-time Supplement, in which a knowledge retriever feeds evidence from knowledge sources to the LLM alongside the query; (b) Post-hoc Correction, in which a fixer revises the LLM's intermediate response using retrieved knowledge and tools such as a code executor and search.]
Figure 4: The illustrations of two distinct methods for utilizing external knowledge to reduce hallucinations in LLMs' responses. | 2309.01219#68 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
2309.01219 | 69 | Knowledge utilization. Once relevant knowledge is obtained, it could be employed at different stages to mitigate hallucinations within LLMs. Existing methods for knowledge utilization can be roughly divided into two categories, as detailed below and illustrated in Figure 4.
(1) Generation-time supplement. The most straightforward approach to utilize retrieved knowledge or tool feedback is to directly concatenate them with user queries before prompting LLMs (Shi et al., 2023c; Mallen et al., 2023; Ram et al., 2023). This method is both effective and easy to implement. Such knowledge is also referred to as context knowledge (Shi et al., 2023b). Existing studies have demonstrated that LLMs possess a strong capability for in-context learning (Dong et al., 2022), which enables them to extract and utilize valuable information from context knowledge to rectify nonfactual claims they previously generated.
(2) Post-hoc correction (a minimal correction-loop sketch is given below). Another common practice involves constructing an auxiliary fixer | 2309.01219#69 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
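As flagged above for post-hoc correction, a fixer gathers external evidence about a draft and then revises it. Below is a minimal sketch of that verify-and-revise loop; the generate and search callables are hypothetical stand-ins, and the prompts are illustrative rather than those used by RARR, Verify-then-Edit, or LLM-Augmenter.

```python
# Minimal sketch of a post-hoc correction ("fixer") loop: draft an answer,
# gather external evidence about it, then revise. LLM and retriever are stubs.

from typing import Callable, List

def post_hoc_correct(question: str,
                     generate: Callable[[str], str],
                     search: Callable[[str], List[str]]) -> str:
    draft = generate(f"Question: {question}\nAnswer:")

    # Ask the model what should be fact-checked, then retrieve evidence for it.
    queries = generate(f"List search queries (one per line) to verify:\n{draft}")
    evidence: List[str] = []
    for q in queries.splitlines():
        if q.strip():
            evidence.extend(search(q.strip()))

    # Revise the draft so it agrees with the gathered evidence.
    return generate(
        "Edit the draft so every claim is supported by the evidence; "
        "remove or hedge unsupported claims.\n"
        f"Draft: {draft}\nEvidence:\n"
        + "\n".join(f"- {e}" for e in evidence) + "\nRevised:"
    )

if __name__ == "__main__":
    stub_generate = lambda prompt: "stub output for: " + prompt.splitlines()[0][:40]
    stub_search = lambda q: [f"snippet about '{q}'"]
    print(post_hoc_correct("When was the Eiffel Tower completed?",
                           stub_generate, stub_search))
```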
2309.01219 | 70 | to rectify hallucinations during the post-processing stage (Cao et al., 2020; Zhu et al., 2021; Fabbri et al., 2022). The fixer can be either another LLM (Peng et al., 2023a; Zhang et al., 2023d; Chern et al., 2023; Gou et al., 2023) or a specific small model (Chen et al., 2023a). Such fixers first interact with external knowledge sources to gather sufficient evidence, and then correct hallucinations. For example, RARR (Gao et al., 2023a) directly prompts an LLM to ask questions about the content that needs to be corrected from multiple perspectives. Then it uses search engines to retrieve relevant knowledge. The LLM-based fixer finally makes corrections based on retrieved evidence. The Verify-then-Edit approach (Zhao et al., 2023a) aims to enhance the factuality of predictions by post-editing reasoning chains based on external knowledge sourced from Wikipedia. To achieve better performance, LLM-Augmenter (Peng et al., 2023a) prompts LLMs to summarize retrieved knowledge before feeding it into the fixer. Moreover, FacTool (Chern | 2309.01219#70 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
Summary & Discussion. Resorting to external knowledge to mitigate hallucinations in LLMs offers several advantages. Firstly, this method circumvents the need for modifying LLMs, making it a plug-and-play and efficient solution. Secondly, it facilitates the easy transfer of proprietary knowledge (e.g., a company's internal data) and real-time updated information to LLMs. Lastly, this approach enhances the interpretability of information generated by LLMs by allowing the tracing of generation results back to the source evidence (Gao et al., 2023b; Yue et al., 2023). However, this direction also presents some remaining challenges. We discuss some of them below.
(1) Knowledge verification. In the era of LLMs, the external knowledge source could extend beyond a single document corpus or a specific website to encompass the entire Internet. However, the information from the Internet is in the wild, which means it may also be fabricated, or even generated by LLMs themselves (Alemohammad et al., 2023). How to verify the authenticity of retrieved knowledge from the Internet is an open and challenging problem to be solved.
Figure 5: Illustrations of three typical methods for estimating LLM uncertainty, shown on the example user query "What is the height of Mount Kilimanjaro?": (a) the logit-based method, (b) the verbalize-based method, and (c) the consistency-based method. In the example of the logit-based method, a red/green background distinguishes tokens with low/high generation probabilities. In the example of the consistency-based method, the responses are acquired from multiple sampling.
(2) Performance/efficiency of retriever/fixer. The performance of the retriever/fixer plays a vital role in ensuring the effects of hallucination mitigation. Future work may consider jointly optimising the whole working flow (retriever→LLM→fixer) via reinforcement learning (Qiao et al., 2023) or other techniques. Besides, the efficiency of the retriever/fixer is another important factor to be considered, as the generation speed of existing LLMs is already a significant burden (Ning et al., 2023).
(3) Knowledge conflict. As introduced before, the retrieved knowledge may conflict with the parametric knowledge stored by LLMs (Qian et al., 2023). Shi et al. (2023b) reveal that LLMs may fail to sufficiently exploit retrieved knowledge when knowledge conflict happens. Xie et al. (2023) take a more cautious look at this phenomenon. How to fully utilize context knowledge is an under-explored question. For example, Liu et al. (2023d) find the performance of retrieval-augmented LLMs significantly degrades when they must access evidence in the middle of long contexts.
# 5.4.3 Exploiting Uncertainty

Uncertainty serves as a valuable indicator for detecting and mitigating hallucinations during the inference process (Manakul et al., 2023). Typically, it refers to the confidence level of model outputs (Jiang et al., 2021; Huang et al., 2023a; Duan et al., 2023). Uncertainty can assist users in determining when to trust LLMs. Provided that the uncertainty of LLM responses can be accurately characterized, users can filter out or rectify LLMs' claims with high uncertainty since such claims are more prone to be fabricated ones (Lin et al., 2023). Generally speaking, methods for estimating the uncertainty of LLMs can be categorized into three types (Xiong et al., 2023), as listed below. To facilitate understanding, we also present illustrative examples for these methods in Figure 5.
(1) Logit-based estimation. The first method is the logit-based method, which requires access to the model logits and typically measures uncertainty by calculating token-level probability or entropy (see the code sketch after this list). This method has been widely used in the machine learning community (Guo et al., 2017).
(2) Verbalize-based estimation. The second is the verbalize-based method, which involves directly requesting LLMs to express their uncertainty, such as using the following prompt: "Please answer and provide your confidence score (from 0 to 100)." This method is effective due to the impressive verbal and instruction-following capabilities of LLMs. Notably, Xiong et al. (2023) further suggest using chain-of-thought prompts (Wei et al., 2022) to enhance this method.
(3) Consistency-based estimation. The third is the consistency-based method (Wang et al., 2022; Shi et al., 2022; Zhao et al., 2023a). This method operates on the assumption that LLMs are likely to provide logically inconsistent responses for the same question when they are indecisive and hallucinating facts.
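As a concrete illustration of the logit-based method in item (1) above, the following sketch computes token-level probabilities and predictive entropy for a statement with an open-source model. It is a minimal example assuming the Hugging Face `transformers` library and GPT-2 as a stand-in for a white-box LLM, not the exact setup of any work cited here.

```python
# Minimal sketch of logit-based uncertainty estimation (illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The height of Mount Kilimanjaro is 5895 meters."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability the model assigns to each observed token (positions shifted by one).
probs = torch.softmax(logits[:, :-1, :], dim=-1)
target_ids = inputs["input_ids"][:, 1:]
token_probs = probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)

# Token-level entropy of the predictive distribution at each position.
entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1)

# Low token probability / high entropy flags spans that are more likely hallucinated.
print("mean token prob:", token_probs.mean().item())
print("max entropy:", entropy.max().item())
```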
Several recent studies have leveraged uncertainty estimation for detecting and mitigating hallucinations in LLMs. SELFCHECKGPT (Manakul et al., 2023) is the first framework to detect LLM hallucinations based on uncertainty measurement in a zero-resource and black-box setting. They employ a consistency-based approach for uncertainty estimation. A non-trivial challenge in SELFCHECKGPT is determining how to measure the consistency of different responses.
Manakul et al. (2023) perform experiments with BERTScore (Zhang et al., 2019), QA-based metrics (Wu and Xiong, 2023) and n-gram metrics. They finally find that a combination of these approaches yields the best results. Mündler et al. (2023) directly utilize an additional LLM to assess whether two LLM responses are logically contradictory given the same context (Luo et al., 2023b), which means at least one of them is hallucinated. Consequently, they employ another LLM to revise such self-contradictory hallucinations from two responses. Agrawal et al. (2023) further adopt the verbalize-based method to evaluate the hallucination rate of LLMs for fabricating references. Varshney et al. (2023), on the other hand, use the logit-based method to detect false concepts in LLMs' responses with high uncertainty. They then fix such content with auxiliary retrieval-augmented LLMs.
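A consistency-based check in the spirit of the methods above can be sketched as follows: sample several answers to the same question and count how often pairs of answers contradict each other. The snippet is a simplified illustration; the `llm` sampling function is an assumed interface, and an off-the-shelf NLI classifier is used as a stand-in for the more elaborate consistency metrics (BERTScore, QA-based, n-gram) studied by Manakul et al. (2023).

```python
# Simplified consistency-based hallucination check (illustrative sketch only).
from itertools import combinations
from typing import Callable

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# An off-the-shelf NLI model used as a contradiction detector; this choice is an
# assumption for illustration, not the metric used in the cited papers.
NLI_NAME = "roberta-large-mnli"
nli_tok = AutoTokenizer.from_pretrained(NLI_NAME)
nli_model = AutoModelForSequenceClassification.from_pretrained(NLI_NAME)
nli_model.eval()

def contradicts(premise: str, hypothesis: str) -> bool:
    inputs = nli_tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nli_model(**inputs).logits
    label = nli_model.config.id2label[int(logits.argmax(dim=-1))]
    return label.upper() == "CONTRADICTION"

def contradiction_rate(llm: Callable[[str], str], question: str, n: int = 5) -> float:
    # Sample n answers from the (assumed) LLM interface with temperature > 0.
    answers = [llm(question) for _ in range(n)]
    pairs = list(combinations(answers, 2))
    hits = sum(contradicts(a, b) for a, b in pairs)
    # A high rate of mutually contradictory samples suggests the model is
    # indecisive and the content is more likely hallucinated.
    return hits / len(pairs) if pairs else 0.0
```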
Besides, Zhao et al. (2023b) present a Pareto optimal self-supervision framework. This framework utilizes available programmatic supervision to assign a risk score to LLM responses, which can serve as an indicator of hallucinations. Luo et al. (2023a) introduce a pre-detection self-evaluation technique, which aims to evaluate the familiarity of LLMs with the concepts in user prompts and prevent the generation of content about those unfamiliar concepts.
Summary & Discussion. Exploiting uncertainty to identify and mitigate LLM hallucinations is a promising research direction today. Three primary approaches exist for estimating the uncertainty of LLMs, each presenting its unique challenges. Firstly, the logit-based method is becoming less applicable for modern commercial LLMs as they are usually closed-source and black-box, rendering their output logits inaccessible. Secondly, regarding the verbalize-based method, researchers have observed that LLMs tend to display a high degree of overconfidence when expressing their confidence (Xiong et al., 2023). Thirdly, the effective measurement of the consistency of different responses remains an unresolved issue in the consistency-based method (Manakul et al., 2023). We believe that leveraging uncertainty is crucial in developing trustworthy LLMs and encourage future research to address the aforementioned challenges in this field.
[Figure 6: An illustration of multi-agent interaction. For the user input "Which musical currently holds the record as Broadway's fourth-longest running show?", one agent initially answers that the musical "Chicago" holds the record; after the agents exchange feedback ("I see your point, but ...", "Most of your claims are right, but ..."), the final response is "As of September 2021, the musical 'Wicked' holds the record as Broadway's fourth-longest running show."]
In addition to the above approaches, other techniques demonstrating the potential for reducing hallucinations are shown below.
Multi-agent interaction. Some recent research has sought to address the hallucination problem in LLMs from a multi-agent perspective, wherein multiple LLMs (also known as agents) independently propose and collaboratively debate their responses to reach a single consensus, as exemplified in Figure 6. Du et al. (2023) is a pioneering work in this line. They initially developed a benchmark for assessing the factual accuracy of prominent computer scientist biographies generated by LMs. Their findings reveal that an individual LLM can easily generate hallucinated information within this benchmark; however, such hallucinations can be mitigated by engaging multiple LLMs in a debate to achieve consensus. Besides, Cohen et al. (2023) ask one LLM to generate claims (acting as EXAMINEE) and another to raise questions about these claims and check the truthfulness of them (acting as EXAMINER). Wang et al. (2023d) instead propose prompting a single LLM to identify, simulate, and iteratively self-collaborate with multiple personas, such as Harry Potter Fan and Jay Chou Fan. By leveraging an LLM as a cognitive synergist, it effectively reduces hallucinations with relatively low costs.
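A bare-bones version of such a debate loop can be sketched as follows. The `agents` are assumed to be callables wrapping independent LLM instances, and the prompt wording is illustrative rather than the protocol of Du et al. (2023) or any other cited work.

```python
# Minimal multi-agent debate sketch (illustrative only).
from typing import Callable, List

def debate(agents: List[Callable[[str], str]], question: str, rounds: int = 2) -> str:
    # Round 0: each agent answers independently.
    answers = [agent(question) for agent in agents]

    for _ in range(rounds):
        updated = []
        for i, agent in enumerate(agents):
            others = "\n".join(a for j, a in enumerate(answers) if j != i)
            prompt = (
                f"Question: {question}\n"
                f"Other agents answered:\n{others}\n"
                f"Your previous answer: {answers[i]}\n"
                "Considering the other answers, give your revised answer."
            )
            updated.append(agent(prompt))
        answers = updated

    # Ask one agent to merge the final answers into a single consensus response.
    summary_prompt = (
        f"Question: {question}\n"
        "Candidate answers:\n" + "\n".join(answers) +
        "\nProduce one final answer that the agents agree on."
    )
    return agents[0](summary_prompt)
```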
Prompt engineering. Existing research highlights that the behavior of LLMs can significantly vary based on the prompts given by users (Si et al., 2022; Zhu et al., 2023). In terms of hallucination, users may encounter an LLM that initially responds accurately but begins to hallucinate information when using different prompts. In light of this observation, Zhang et al. (2023a) endeavour to engineer more effective prompts to mitigate hallucination. Concretely, they employ the chain-of-thought prompt (Wei et al., 2022) to compel LLMs to generate reasoning steps before providing the final answers. However, chain-of-thought may introduce some new challenges. The potential of hallucinated reasoning steps is one of them. Furthermore, a popular practice nowadays involves explicitly instructing LLMs not to disseminate false or unverifiable information when designing the "system prompt", i.e., the special messages used to steer the behavior of LLMs. The following system prompt used for Llama 2-Chat (Touvron et al., 2023b) exemplifies this approach: If you don't know the answer to a question, please don't share false information.
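In practice, such an instruction is placed in the system slot of the chat template. The snippet below shows one way this could look with the commonly documented Llama 2 chat format; the template layout and the surrounding system-prompt wording are assumptions for illustration, and only the final sentence of the system prompt is quoted from the text above.

```python
# Illustrative construction of a Llama-2-style chat prompt with an
# anti-hallucination instruction in the system slot. The template and the
# first sentence of the system prompt are assumptions, not the official text.
SYSTEM_PROMPT = (
    "You are a helpful and honest assistant. "
    "If you don't know the answer to a question, please don't share false information."
)

def build_llama2_chat_prompt(user_message: str, system_prompt: str = SYSTEM_PROMPT) -> str:
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system_prompt}\n"
        "<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

print(build_llama2_chat_prompt("Which musical is Broadway's fourth-longest running show?"))
```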
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
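The prompt-engineering mitigation described in chunk 82 above can be made concrete with a small sketch. The code below assembles a refusal-oriented system prompt together with a chain-of-thought instruction for a generic chat-style API; the `build_messages` helper and the message format are illustrative assumptions, not the exact setup used by Zhang et al. (2023a) or Llama 2-Chat.

```python
# Sketch (assumed message format): combine a refusal-oriented system prompt
# with a chain-of-thought instruction before sending a question to a chat LLM.

def build_messages(question: str) -> list:
    # System prompt in the spirit of the Llama 2-Chat example quoted above.
    system_prompt = (
        "You are a helpful assistant. If you don't know the answer to a "
        "question, please don't share false information."
    )
    # Chain-of-thought: ask for explicit reasoning steps before the final answer.
    user_prompt = (
        f"{question}\n"
        "Let's think step by step, then give the final answer on its own line."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

if __name__ == "__main__":
    for message in build_messages(
        "What is the population of Denver according to the 2020 census?"
    ):
        print(f"[{message['role']}] {message['content']}")
```

In practice these messages would be passed to whichever chat completion endpoint is in use; the point is only that the refusal instruction and the reasoning instruction live in separate roles.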
2309.01219 | 83 | Analyzing LLMs' internal states. Azaria and Mitchell (2023) contend that LLMs may be aware of their own falsehoods, implying that their internal states could be utilized to detect hallucinations. They propose Statement Accuracy Prediction based on Language Model Activations (SAPLMA), which adds a classifier on top of each hidden layer of the LLM to determine truthfulness. Experimental results indicate that LLMs might "know" when the statements they generate are false, and SAPLMA can effectively extract such information. The Inference-Time Intervention (ITI) method (Li et al., 2023b) is also grounded in a similar hypothesis. They further shift model activations along factuality-related heads during inference and discover that this can mitigate hallucinations. These studies suggest that "the hallucination within LLMs may be more a result of generation techniques than the underlying representation" (Agrawal et al., 2023). (A toy probing sketch in this spirit appears after this entry's reference list.) | 2309.01219#83 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
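The SAPLMA-style idea in chunk 83 above (train a classifier on hidden-layer activations to predict whether a statement is truthful) can be illustrated with a toy probe. The activations below are synthetic stand-ins for real LLM hidden states, and the logistic-regression probe is an assumption for illustration; the original work trains its own classifier on activations from specific layers.

```python
# Toy SAPLMA-style probe: classify "truthful" vs "hallucinated" statements
# from (synthetic) hidden-layer activations. Numbers are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
hidden_dim = 64

# Pretend activations for true and false statements occupy slightly
# shifted regions of the hidden space.
acts_true = rng.normal(loc=0.2, scale=1.0, size=(200, hidden_dim))
acts_false = rng.normal(loc=-0.2, scale=1.0, size=(200, hidden_dim))

X = np.vstack([acts_true, acts_false])
y = np.array([1] * 200 + [0] * 200)  # 1 = truthful, 0 = hallucinated

probe = LogisticRegression(max_iter=1000).fit(X, y)
print(f"probe accuracy on its training set: {probe.score(X, y):.2f}")
```

With a real model, the features would be hidden states read off a chosen transformer layer while the model reads or generates each statement.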
2309.01219 | 84 | Human-in-the-loop. Zhang et al. (2023c) posit that a potential cause of hallucination in LLMs could be the misalignment between knowledge and user questions, a phenomenon that is particularly prevalent in the context of retrieval-augmented generation (RAG). To address this is- [Table 11] User Input (EN): "What is the population of Denver according to the 2020 census?"; LLM Response (EN): "According to the 2020 United States Census, the population of Denver, Colorado, is 715,522" ✓. User Input (ZH): "根据2020年人口普查，丹佛的人口是多少？" (According to the 2020 census, what is the population of Denver?); LLM Response (ZH): "根据2020年人口普查，丹佛的人口为73,921" (According to the 2020 census, the population of Denver is 73,921) ✗. User Input (EN): "What is the population of Denver according to the 2020 census? Answer in Chinese."; LLM Response (ZH): "根据2020年人口普查，丹佛的人口为704,621" (According to the 2020 census, the population of Denver is 704,621) ✗. | 2309.01219#84 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
2309.01219 | 85 | Table 11: A real example in which ChatGPT (July 2023 Version) accurately answered a question in English conversation but presented hallucinations for the same question when communicating in Chinese (the correct population of Denver in 2020 is 715,522, according to https://en.wikipedia.org/wiki/Denver).
sue, they introduce MixAlign, a human-in-the-loop framework that utilizes LLMs to align user queries with stored knowledge, and further encourages users to clarify this alignment. By refining user queries iteratively, MixAlign not only reduces hallucinations but also enhances the quality of the generated content. (A schematic clarification-loop sketch appears after this entry's reference list.)
discrimination-style benchmark (Li et al., 2023a; Muhlgay et al., 2023) could relatively accurately evaluate a model's ability to distinguish hallucinations, the relationship between discrimination performance and generation performance is still unclear. These issues all need more in-depth exploration. | 2309.01219#85 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
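The iterative query-knowledge alignment loop described in the MixAlign passage above can be illustrated schematically. The code below is a simplified stand-in, not the actual MixAlign algorithm: the knowledge base, the word-overlap matching rule, and the clarification wording are all assumptions for illustration.

```python
# Schematic human-in-the-loop refinement: try to align a user query with
# stored knowledge, and ask the user to clarify when no confident match exists.
KNOWLEDGE = {
    "denver population 2020 census": "715,522 (2020 United States Census)",
    "denver elevation": "about 1,609 m, hence the nickname 'Mile High City'",
}

def best_match(query):
    """Return the knowledge key sharing at least two words with the query, if any."""
    query_terms = set(query.lower().split())
    score, key = max((len(query_terms & set(k.split())), k) for k in KNOWLEDGE)
    return key if score >= 2 else None

def answer_with_clarification(query, ask_user, max_rounds=3):
    for _ in range(max_rounds):
        key = best_match(query)
        if key is not None:
            return f"Based on stored knowledge ({key}): {KNOWLEDGE[key]}"
        # No confident alignment: hand the question back to the human.
        query = ask_user(
            f"I couldn't match '{query}' to my knowledge. Could you rephrase or add detail?"
        )
    return "I don't have reliable information for that question."

if __name__ == "__main__":
    scripted_user = iter(["denver population in the 2020 census"])
    print(answer_with_clarification("how many people live there", lambda _: next(scripted_user)))
```

A real implementation would use an LLM both to judge the alignment and to phrase the clarification question; the loop structure is the part this sketch is meant to show.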
2309.01219 | 86 | Optimizing model architecture. Several studies have explored modifying the architecture of LMs to mitigate hallucinations. Examples include the multi-branch decoder (Rebuffel et al., 2022) and the uncertainty-aware decoder (Xiao and Wang, 2021). Li et al. (2023g) suggest employing a bidirectional autoregressive architecture in the construction of LLMs, which enables language modeling from both left-to-right and right-to-left. They claim that this design strategy could contribute to the reduction of hallucinations by effectively leveraging bidirectional information. (A toy bidirectional-modeling sketch appears after this entry's reference list.)
# 6 Outlooks
In this section, we discuss a few unresolved challenges in the investigation of hallucinations within LLMs and offer our insights into potential future research directions. | 2309.01219#86 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
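As a rough illustration of the bidirectional language-modeling idea mentioned in chunk 86 above, the toy code below fits a tiny bigram model left-to-right and another right-to-left on the same text and averages their losses. It is a simplification for intuition only and does not reproduce the specific architecture proposed by Li et al. (2023g).

```python
# Toy bidirectional language modeling: average the next-token loss of a
# left-to-right bigram model and a right-to-left one on the same text.
import math
from collections import Counter, defaultdict

def bigram_nll(tokens):
    """Average negative log-likelihood under an add-one-smoothed bigram model
    fit on the same token sequence."""
    vocab_size = len(set(tokens))
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    nll = 0.0
    for prev, nxt in zip(tokens, tokens[1:]):
        prob = (counts[prev][nxt] + 1) / (sum(counts[prev].values()) + vocab_size)
        nll -= math.log(prob)
    return nll / (len(tokens) - 1)

tokens = "the cat sat on the mat because the cat was tired".split()
l2r = bigram_nll(tokens)
r2l = bigram_nll(list(reversed(tokens)))
print(f"L2R loss={l2r:.3f}  R2L loss={r2l:.3f}  combined={(l2r + r2l) / 2:.3f}")
```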
2309.01219 | 87 | # 6 Outlooks
In this section, we discuss a few unresolved challenges in the investigation of hallucinations within LLMs and offer our insights into potential future research directions.
Reliable evaluation. Although considerable effort has been dedicated to building evaluation benchmarks for quantitatively assessing hallucination in LLMs, there are still issues that need to be solved. The automatic evaluation in generation-style hallucination benchmarks cannot accurately reflect performance or align with human annotation. Such inaccuracy is reflected in two ways: (1) the automatic metric does not perfectly align with human annotations (Lin et al., 2021; Min et al., 2023; Muhlgay et al., 2023); (2) the reliability of the automatic metric varies across texts from different domains or generated by different LLMs (Min et al., 2023), resulting in reduced robustness for generalization. (A toy metric-human agreement check appears after this entry's reference list.) Although the | 2309.01219#87 | Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models | While large language models (LLMs) have demonstrated remarkable capabilities
across a range of downstream tasks, a significant concern revolves around their
propensity to exhibit hallucinations: LLMs occasionally generate content that
diverges from the user input, contradicts previously generated context, or
misaligns with established world knowledge. This phenomenon poses a substantial
challenge to the reliability of LLMs in real-world scenarios. In this paper, we
survey recent efforts on the detection, explanation, and mitigation of
hallucination, with an emphasis on the unique challenges posed by LLMs. We
present taxonomies of the LLM hallucination phenomena and evaluation
benchmarks, analyze existing approaches aiming at mitigating LLM hallucination,
and discuss potential directions for future research. | http://arxiv.org/pdf/2309.01219 | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi | cs.CL, cs.AI, cs.CY, cs.LG | work in progress; 32 pages | null | cs.CL | 20230903 | 20230924 | [
{
"id": "2307.03109"
},
{
"id": "2306.05424"
},
{
"id": "2305.20050"
},
{
"id": "2308.06394"
},
{
"id": "2306.16564"
},
{
"id": "2307.03917"
},
{
"id": "2305.11747"
},
{
"id": "2305.10355"
},
{
"id": "2308.07317"
},
{
"id": "2308.10168"
},
{
"id": "2305.06311"
},
{
"id": "2307.10169"
},
{
"id": "2307.15043"
},
{
"id": "2301.00234"
},
{
"id": "2305.03268"
},
{
"id": "2307.15343"
},
{
"id": "2303.16104"
},
{
"id": "2309.03118"
},
{
"id": "2307.11768"
},
{
"id": "2309.09558"
},
{
"id": "2305.13300"
},
{
"id": "2211.05100"
},
{
"id": "2305.14627"
},
{
"id": "2305.19187"
},
{
"id": "2004.00345"
},
{
"id": "2307.13528"
},
{
"id": "2210.09150"
},
{
"id": "2307.04964"
},
{
"id": "2203.05115"
},
{
"id": "2309.05936"
},
{
"id": "2305.11738"
},
{
"id": "2306.09296"
},
{
"id": "2309.02654"
},
{
"id": "2305.14795"
},
{
"id": "2305.14325"
},
{
"id": "2203.03802"
},
{
"id": "2305.14623"
},
{
"id": "2309.05463"
},
{
"id": "2308.10792"
},
{
"id": "2307.10236"
},
{
"id": "2302.13971"
},
{
"id": "2308.11764"
},
{
"id": "2309.11064"
},
{
"id": "2305.13281"
},
{
"id": "2306.03341"
},
{
"id": "2112.09332"
},
{
"id": "2307.01379"
},
{
"id": "2309.08594"
},
{
"id": "2304.05613"
},
{
"id": "2303.15621"
},
{
"id": "2301.12652"
},
{
"id": "2307.06908"
},
{
"id": "2307.02483"
},
{
"id": "2304.14178"
},
{
"id": "2305.13534"
},
{
"id": "2303.12528"
},
{
"id": "2306.13063"
},
{
"id": "2305.18248"
},
{
"id": "2210.11416"
},
{
"id": "2306.01116"
},
{
"id": "2005.03642"
},
{
"id": "2306.05212"
},
{
"id": "2305.13269"
},
{
"id": "2305.14908"
},
{
"id": "2307.11019"
},
{
"id": "2307.00360"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2305.14002"
},
{
"id": "2303.18223"
},
{
"id": "2307.03172"
},
{
"id": "2307.03987"
},
{
"id": "2301.09785"
},
{
"id": "2302.04023"
},
{
"id": "2210.07229"
},
{
"id": "2307.05300"
},
{
"id": "2306.04528"
},
{
"id": "2305.01651"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.13669"
},
{
"id": "2307.06290"
},
{
"id": "2304.01196"
},
{
"id": "2109.07958"
},
{
"id": "2309.03883"
},
{
"id": "2302.07842"
},
{
"id": "2307.01850"
},
{
"id": "2305.14251"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2304.07327"
},
{
"id": "2308.04215"
},
{
"id": "2306.14565"
},
{
"id": "2307.15337"
},
{
"id": "2308.03958"
},
{
"id": "2306.04751"
},
{
"id": "2302.00083"
},
{
"id": "2305.16355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13068"
},
{
"id": "2207.05221"
},
{
"id": "2307.09288"
},
{
"id": "2202.01110"
},
{
"id": "2307.13702"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2305.12740"
},
{
"id": "2309.11495"
},
{
"id": "2305.15852"
},
{
"id": "2303.08896"
},
{
"id": "2305.00955"
},
{
"id": "2304.10513"
},
{
"id": "2201.05273"
},
{
"id": "2307.08701"
},
{
"id": "2205.11482"
},
{
"id": "2305.04757"
},
{
"id": "2304.13734"
},
{
"id": "2304.03442"
},
{
"id": "2212.09251"
},
{
"id": "2305.14739"
},
{
"id": "2305.18153"
},
{
"id": "2211.14876"
},
{
"id": "2303.11315"
},
{
"id": "2305.11206"
},
{
"id": "2307.16877"
},
{
"id": "2302.12813"
}
] |
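The concern in chunk 87 above about automatic metrics not aligning with human annotation is often quantified with a simple agreement or correlation statistic. The sketch below computes the Pearson correlation between hypothetical automatic factuality scores and human labels; all numbers are invented for illustration.

```python
# Toy check of how well an automatic hallucination metric tracks human judgments.
import math

automatic_scores = [0.91, 0.35, 0.78, 0.12, 0.66, 0.49]  # metric's factuality scores
human_labels = [1, 0, 1, 0, 1, 0]                        # 1 = factual, 0 = hallucinated

def pearson(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    std_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    std_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (std_x * std_y)

print(f"metric-human correlation: {pearson(automatic_scores, human_labels):.3f}")
```

A benchmark whose automatic metric shows low correlation with human labels, or whose correlation swings across domains and models, exhibits exactly the reliability problem described above.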