id: stringlengths 12-15
title: stringlengths 8-162
content: stringlengths 1-17.6k
prechunk_id: stringlengths 0-15
postchunk_id: stringlengths 0-15
arxiv_id: stringlengths 10-10
references: listlengths 1-1
2309.02427#157
Cognitive Architectures for Language Agents
191, 1972. L. Wong, G. Grand, A. K. Lew, N. D. Goodman, V. K. Mansinghka, J. Andreas, and J. B. Tenenbaum. From word models to world models: Translating from natural language to the probabilistic language of thought. arXiv preprint arXiv:2306.12672, 2023. R. E. Wray, J. R. Kirk, J. E. Laird, et al. Language models as a knowledge source for cognitive agents. arXiv preprint arXiv:2109.08270, 2021.
2309.02427#156
2309.02427#158
2309.02427
[ "2305.14909" ]
2309.02427#158
Cognitive Architectures for Language Agents
Q. Wu, G. Bansal, J. Zhang, Y. Wu, S. Zhang, E. Zhu, B. Li, L. Jiang, X. Zhang, and C. Wang. Autogen: Enabling next-gen llm applications via multi-agent conversation framework. arXiv preprint arXiv:2308.08155, 2023. T. Wu, E. Jiang, A. Donsbach, J. Gray, A. Molina, M. Terry, and C. J. Cai.
2309.02427#157
2309.02427#159
2309.02427
[ "2305.14909" ]
2309.02427#159
Cognitive Architectures for Language Agents
Promptchainer: Chaining large language model prompts through visual programming. In CHI Conference on Human Factors in Computing Systems Extended Abstracts, pages 1-10, 2022a. T. Wu, M. Terry, and C. J. Cai. AI chains: Transparent and controllable human-AI interaction by chaining large language model prompts. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pages 1-22, 2022b. Z. Xi, W. Chen, X. Guo, W. He, Y. Ding, B. Hong, M. Zhang, J. Wang, S. Jin, E. Zhou, et al.
2309.02427#158
2309.02427#160
2309.02427
[ "2305.14909" ]
2309.02427#160
Cognitive Architectures for Language Agents
The rise and potential of large language model based agents: A survey. arXiv preprint arXiv:2309.07864, 2023. Y. Xie, T. Xie, M. Lin, W. Wei, C. Li, B. Kong, L. Chen, C. Zhuo, B. Hu, and Z. Li. Olagpt: Empowering llms with human-like problem-solving abilities. arXiv preprint arXiv:2305.16334, 2023. B. Xu, X. Liu, H. Shen, Z. Han, Y. Li, M. Yue, Z. Peng, Y. Liu, Z. Yao, and D. Xu.
2309.02427#159
2309.02427#161
2309.02427
[ "2305.14909" ]
2309.02427#161
Cognitive Architectures for Language Agents
Gentopia: A collaborative platform for tool-augmented llms. arXiv preprint arXiv:2308.04030, 2023a. B. Xu, Z. Peng, B. Lei, S. Mukherjee, Y. Liu, and D. Xu. Rewoo: Decoupling reasoning from observations for efficient augmented language models. arXiv preprint arXiv:2305.18323, 2023b. B. Xu, A. Yang, J. Lin, Q. Wang, C. Zhou, Y. Zhang, and Z. Mao.
2309.02427#160
2309.02427#162
2309.02427
[ "2305.14909" ]
2309.02427#162
Cognitive Architectures for Language Agents
ExpertPrompting: Instructing Large Language Models to be Distinguished Experts. arXiv preprint arXiv:2305.14688, 2023c. J. Yang, A. Prabhakar, K. Narasimhan, and S. Yao. Intercode: Standardizing and benchmarking interactive coding with execution feedback. arXiv preprint arXiv:2306.14898, 2023. S. Yao and K. Narasimhan.
2309.02427#161
2309.02427#163
2309.02427
[ "2305.14909" ]
2309.02427#163
Cognitive Architectures for Language Agents
Language agents in the digital world: Opportunities and risks. princeton-nlp.github.io, Jul 2023. URL https://princeton-nlp.github.io/language-agent-impact/. S. Yao, R. Rao, M. Hausknecht, and K. Narasimhan. Keep CALM and explore: Language models for action generation in text-based games. arXiv preprint arXiv:2010.02903, 2020.
2309.02427#162
2309.02427#164
2309.02427
[ "2305.14909" ]
2309.02427#164
Cognitive Architectures for Language Agents
S. Yao, H. Chen, J. Yang, and K. Narasimhan. Webshop: Towards scalable real-world web interaction with grounded language agents. Advances in Neural Information Processing Systems, 35:20744-20757, 2022a. S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. Cao. React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629, 2022b.
2309.02427#163
2309.02427#165
2309.02427
[ "2305.14909" ]
2309.02427#165
Cognitive Architectures for Language Agents
S. Yao, D. Yu, J. Zhao, I. Shafran, T. L. Griffiths, Y. Cao, and K. Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023. E. Zelikman, Y. Wu, J. Mu, and N. Goodman. STaR:
2309.02427#164
2309.02427#166
2309.02427
[ "2305.14909" ]
2309.02427#166
Cognitive Architectures for Language Agents
Bootstrapping reasoning with reasoning. Advances in Neural Information Processing Systems, 35:15476-15488, 2022. A. Zeng, M. Attarian, B. Ichter, K. Choromanski, A. Wong, S. Welker, F. Tombari, A. Purohit, M. Ryoo, V. Sindhwani, et al. Socratic models: Composing zero-shot multimodal reasoning with language. arXiv preprint arXiv:2204.00598, 2022. C. Zhang, L. Wong, G. Grand, and J. Tenenbaum.
2309.02427#165
2309.02427#167
2309.02427
[ "2305.14909" ]
2309.02427#167
Cognitive Architectures for Language Agents
Grounded physical language understanding with probabilistic programs and simulated worlds. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 45, 2023a. T. Zhang, F. Liu, J. Wong, P. Abbeel, and J. E. Gonzalez. The wisdom of hindsight makes language models better instruction followers. arXiv preprint arXiv:2302.05206, 2023b. Y. Zhang, S. Sun, M. Galley, Y.-C. Chen, C. Brockett, X. Gao, J. Gao, J. Liu, and W. B. Dolan. Dialogpt: Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics:
2309.02427#166
2309.02427#168
2309.02427
[ "2305.14909" ]
2309.02427#168
Cognitive Architectures for Language Agents
System Demonstrations, pages 270-278, 2020. W. J. Zhao, R. Richie, and S. Bhatia. Process and content in decisions from memory. Psychological Review, 129(1):73, 2022. V. Zhong, A. W. Hanjie, S. Wang, K. Narasimhan, and L. Zettlemoyer. SILG: The Multi-domain Symbolic Interactive Language Grounding Benchmark. Advances in Neural Information Processing Systems, 34:21505-21519, 2021. C. Y. Zhou, D. Talmi, N. Daw, and M. G. Mattar.
2309.02427#167
2309.02427#169
2309.02427
[ "2305.14909" ]
2309.02427#169
Cognitive Architectures for Language Agents
Episodic retrieval for model-based evaluation in sequential decision tasks, 2023a. H. Zhou, M. Huang, T. Zhang, X. Zhu, and B. Liu. Emotional chatting machine: Emotional conversation generation with internal and external memory. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018. S. Zhou, U. Alon, F. F. Xu, Z. Jiang, and G.
2309.02427#168
2309.02427#170
2309.02427
[ "2305.14909" ]
2309.02427#170
Cognitive Architectures for Language Agents
Neubig. Docprompting: Generating code by retrieving the docs. In The Eleventh International Conference on Learning Representations, 2022a. S. Zhou, F. F. Xu, H. Zhu, X. Zhou, R. Lo, A. Sridhar, X. Cheng, Y. Bisk, D. Fried, U. Alon, et al. WebArena: A Realistic Web Environment for Building Autonomous Agents. arXiv preprint arXiv:2307.13854, 2023b.
2309.02427#169
2309.02427#171
2309.02427
[ "2305.14909" ]
2309.02427#171
Cognitive Architectures for Language Agents
Y. Zhou, A. I. Muresanu, Z. Han, K. Paster, S. Pitis, H. Chan, and J. Ba. Large language models are human-level prompt engineers. arXiv preprint arXiv:2211.01910, 2022b.
2309.02427#170
2309.02427
[ "2305.14909" ]
2309.01660#0
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
# Unveiling theory of mind in large language models: A parallel to single neurons in the human brain Mohsen Jamali1, Ziv M. Williams1,2,3*, Jing Cai1*† 1 Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA. 2 Harvard-MIT Division of Health Sciences and Technology, Boston, MA. 3 Harvard Medical School, Program in Neuroscience, Boston, MA. * Senior co-authors † Correspondence should be sent to [email protected] # Abstract
2309.01660#1
2309.01660
[ "2302.02083" ]
2309.01660#1
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
With their recent development, large language models (LLMs) have been found to exhibit a certain level of Theory of Mind (ToM), a complex cognitive capacity that is related to our conscious mind and that allows us to infer another's beliefs and perspective. While human ToM capabilities are believed to derive from the neural activity of a broadly interconnected brain network, including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise processes underlying LLMs' capacity for ToM, or their similarities with those of humans, remain largely unknown. In this study, we drew inspiration from the dmPFC neurons subserving human ToM and employed a similar methodology to examine whether LLMs exhibit comparable characteristics. Surprisingly, our analysis revealed a striking resemblance between the two, as hidden embeddings (artificial neurons) within LLMs started to exhibit significant responsiveness to either true- or false-belief trials, suggesting their ability to represent another's
2309.01660#0
2309.01660#2
2309.01660
[ "2302.02083" ]
2309.01660#2
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
perspective. These artificial embedding responses were closely correlated with the LLMs' performance during the ToM tasks, a property that was dependent on the size of the models. Further, the other's beliefs could be accurately decoded using the entire embeddings, indicating the presence of the embeddings' ToM capability at the population level. Together, our findings revealed an emergent property of LLMs' embeddings that modified their activities in response to ToM features, offering initial evidence of a parallel between the artificial model and neurons in the human brain.
2309.01660#1
2309.01660#3
2309.01660
[ "2302.02083" ]
2309.01660#3
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
# Introduction In recent years, the rapid evolution of Large Language Models (LLMs) has opened a new era of machine intelligence (1, 2). Beyond their remarkable power in language generation, these LLMs have exhibited a certain level of competence across diverse domains, including conversation, code generation, basic mathematical calculation, logical reasoning, and problem-solving tasks (3-7). Particularly intriguing is their emergent capacity to engage in Theory of Mind (ToM), a cognitive ability essential for attributing mental states and understanding the perspectives of others (8, 9). Notably, recent research has shown that LLMs are capable of achieving ToM skills comparable to those of seven-year-olds (10). Although other researchers raise questions about the extent to which large language models can comprehend and simulate theory of mind (11-13), it is evident that LLMs have achieved a level of ToM capability that far surpasses that of earlier, smaller-scale language models (10). Theory of mind is a critical cognitive ability through which humans create intricate mental representations of other agents and comprehend that these agents may possess intentions, beliefs or actions that differ from one's own or from objective reality (8, 9). A critical test for ToM is the false belief task, which evaluates whether one can recognize that someone may hold an invalid belief that diverges from reality after a change to the environment that they did not witness (14-16). For example, a person might believe an apple is still on the tree if that person did not witness the apple falling. Over the past few decades, human brain imaging studies have provided substantial evidence for the brain network that supports our ToM ability, including the temporal-parietal junction, superior temporal sulcus and the dorsal medial prefrontal cortex (dmPFC) (17-20). Recently, our research has revealed a detailed single-neuronal process in the human dmPFC for representing others' beliefs and identified candidate neurons that could support ToM (9). Nevertheless, it remains to be seen whether there exists any parallel to the neural activity associated with human theory of mind in large language models. Here, we used a methodology similar to the one employed in humans (9) to examine the relationship between single neurons in the human brain and the embeddings in the LLM substructures.
2309.01660#2
2309.01660#4
2309.01660
[ "2302.02083" ]
2309.01660#4
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
We aim to begin studying whether and what processes may commonly subserve ToM ability, how they align with task performance, and how they precisely relate to network structure and size. Utilizing open-source LLMs, our initial approach involved a detailed evaluation across multiple ToM tasks, with task materials closely resembling those provided to human participants. Building on these comparisons, we then explored which specific aspects of the hidden embeddings correlated with task performance and with the ability of the LLM models to accurately discern false from true beliefs. These results were then compared to those previously obtained from single neurons within the human brain. Finally, we verified our findings by conducting a decoding analysis to directly predict the other's beliefs from hidden embeddings. These analyses, in combination, provide insight into how LLMs achieve high-level ToM capabilities, what hidden network processes are involved, and how these compare to those of native biological neurons processing the same precise tasks.
2309.01660#3
2309.01660#5
2309.01660
[ "2302.02083" ]
2309.01660#5
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
# Results # Large language models' performances on theory of mind questions To first evaluate the capacity of LLMs for ToM, we used four independently trained, open-source LLMs: Falcon (21, 22), LLaMa (23), Pythia (24) and GPT-2 models (25). Among them, Falcon and LLaMa exhibited remarkable performance among the open-sourced models, as demonstrated by their rankings on the Huggingface leaderboard (26). Each tested LLM encompassed multiple versions with various numbers of hidden layers and parameters, fine-tuned on multiple datasets, as summarized in Table 3. These variations of a model group spanned a broad range of model performance on language tasks, forming a comprehensive collection of models exhibiting linguistic capabilities. We initially assessed these models' ability to perform theory of mind tasks using the same time-aligned materials obtained from neuronal recordings, as well as how performance was precisely affected by LLM size (Table 1) (9). Each model underwent independent evaluation through a series of trials comprising a scenario statement followed by two corresponding questions. The statements were designed in pairs, with a true belief trial and a false belief trial based on whether the agent's belief matched the reality or not (Fig. 1A, Table 1). For example, the statement may provide the scenario "
2309.01660#4
2309.01660#6
2309.01660
[ "2302.02083" ]
2309.01660#6
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
Ned and you take a photo of an apple on a tree. While the photo develops, Ned leaves and is unaware that a wind blows the apple to the ground." Since Ned's belief about the location of the apple is different from the reality, this is a false-belief trial. In comparison, a true-belief trial included a statement in which Ned's belief is the same as reality (Fig. 1A). The statements were followed by two questions, one relating to the belief of the agent in the scenario statement (i.e., the "belief" question) and the other concerning the physical state of reality (i.e., the "fact" question).
2309.01660#5
2309.01660#7
2309.01660
[ "2302.02083" ]
2309.01660#7
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
To obtain plausible responses from models with different language capabilities, we formulated ToM questions by presenting partial sentences that would guide the predicted word towards being the answer (Fig. 1A), and compared the predicted probabilities of the possible words ("tree" or "ground" in this example) to assess whether the correct answer had a higher probability than the other (details in Methods). Together, our task material is composed of 76 trials. The lengths of the statements varied between 81 and 191 words, with an average of 125 words. Overall, we found the tested LLMs had higher accuracies when asked about the facts and others' beliefs in true-belief trials compared to the false-belief trials (Fig. 1B, C). Specifically, the accuracies of the predicted answers for the belief questions from the true-belief trials by different LLMs reached an average of 68% (50% chance performance; ranging from 56% to 77%), which was similar to the prediction accuracies on the fact questions (ranging from 55% to 79%, with an average of 70%). The false-belief accuracies were lower, by contrast, with an average of only 52% (ranging from 26% to 69%). For these trials particularly, larger models (model parameters ≥ 12b) performed significantly better than smaller models (≤ 7b; T-test, statistic = 2.88, p = 0.01), with the LLaMa-33b model showing the highest accuracy at 69%. In comparison, smaller models showed accuracies lower than or similar to chance level. Therefore, although most models exhibited high accuracies on questions about facts or in true-belief trials, only large models showed high accuracies in response to other-belief questions in false-belief trials. To ensure that the observed accuracies did not independently originate from any clues outside of the scenarios in the statements, we performed the following controls. Firstly, we input each model with the same questions as before, but here we excluded the preceding statements. This control condition therefore allowed us to assess whether factors such as imbalanced word frequencies or linguistic information within the questions could account for the high accuracies.
2309.01660#6
2309.01660#8
2309.01660
[ "2302.02083" ]
2309.01660#8
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
We found that the question-only tests, however, returned an average accuracy of 47% for all models (i.e., chance-level accuracy), with the larger models showing similar performance to the smaller models (T-test, statistic = -0.98, p = 0.34). Secondly, to examine whether the high accuracies might be accounted for by factors unrelated to the content of the statement, we randomly permuted words from the statements for each true and false belief trial (Methods, Table 2). This resulted in an average accuracy of 55% for all models, and there was no difference between the large and small models for the false belief questions (T-test, statistic = -1.94, p = 0.07). Therefore, these control conditions provided additional confirmation that the remarkable performance of the large models depended on the content of the statements, ruling out explanations based on random factors or word frequency alone.
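To make the evaluation procedure described above concrete, the sketch below shows how two candidate answer words can be scored from a causal language model's next-token logits with the Hugging Face transformers library. This is an illustration only, not the authors' released code: the model name (gpt2), the prompt, and the candidate words are placeholder choices.

```python
# Illustrative sketch: score two candidate answers by comparing next-token logits.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")                 # placeholder model
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = ("Ned and you take a photo of an apple on a tree. While the photo develops, "
          "Ned leaves and is unaware that a wind blows the apple to the ground. "
          "Ned believes that the apple is on the")
candidates = [" tree", " ground"]                           # leading space matters for BPE tokenizers

with torch.no_grad():
    input_ids = tok(prompt, return_tensors="pt").input_ids
    next_token_logits = lm(input_ids).logits[0, -1]         # logits over the vocabulary for the next token

# Compare the logit of the first sub-token of each candidate word and pick the larger one.
scores = {w: next_token_logits[tok.encode(w)[0]].item() for w in candidates}
predicted_answer = max(scores, key=scores.get)
```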
2309.01660#7
2309.01660#9
2309.01660
[ "2302.02083" ]
2309.01660#9
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
# Embeddings selectively tuned to true and false beliefs Within human cognition, ToM performance is thought to be supported by a vast network of interconnected neurons that presumably function together to form representations of another's beliefs. Our recent study has identified single neurons in the dorsal medial prefrontal cortex that exhibit selective modulations for true- versus false-belief trials during the period of questions, suggesting a particular role for processing others' beliefs and potentially subserving ToM ability (9). Here, we obtained data from single-neuronal recordings from human subjects as they performed a structured false-belief task. Out of 212 recorded human neurons, 49 (23%) displayed significant changes in activities for true- or false-belief trials when human participants performed ToM tasks (Fig. 2A). That is, these neurons displayed a consistent difference in their firing rates when the other's beliefs were true compared to when the other's beliefs were false.
2309.01660#8
2309.01660#10
2309.01660
[ "2302.02083" ]
2309.01660#10
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
These neurons therefore reliably changed their activities in relation to the other's beliefs despite variations in the specific statements and scenarios within each trial type, providing evidence for the specific tuning of human neurons to ToM computations. To investigate whether the artificial models' theory of mind capability shared similar mechanisms to those in the human brain, we performed an element-wise analysis using the following procedures: Firstly, to obtain the activities of "artificial neurons" in LLMs, we used hidden embeddings (output of transformer modules) from all layers as well as the input to the first transformer module. Thus, for example, instead of using the firing rate values for each neuron to determine their response selectivity to false versus true beliefs, we used the embedding values for each node in the network (Methods). Secondly, to establish a meaningful comparison with human neurons, we employed ToM task materials for the LLMs closely aligned with those we tested on humans. Here, we used the same statements as in model evaluation, with trials grouped into pairs of true and false belief trials, and asked a belief question following the statement (Fig. 2A, Table 1, Method). These questions were exactly the same for each pair, but the answer depended on the information in the statements, which defined the trial types. We modified the statements so that each true-false-belief pair contained a similar number of words to minimize any effect caused by variations in word counts. Finally, we input the model with the concatenation of the statement and the question as one batch and only examined the embeddings from the tokens within the questions (detailed explanation in Method). We then examined whether embeddings showed significant differences in values between true- and false-belief trials using a Mann-Whitney U test. Thus, if an embedding encoded no ToM attributes and solely reflected the literal wording information (which was very similar within each pair) or had no memory of the statements, it would result in similar values between the pair of trials.
2309.01660#9
2309.01660#11
2309.01660
[ "2302.02083" ]
2309.01660#11
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
Together, the LLM model's hidden embeddings can be thought of, in turn, as the activities of artificial neurons across all network layers that vary in relation to the task and trial-aligned input. Using this approach, we indeed observed the presence of embeddings with significant responses corresponding to the different trial types. The percentage of modulated embeddings varied across models and layers (Fig. 2B-D). For example, in the Falcon-40b model, we found that 6.3% of embeddings in layer 25 were significant, which represented the highest percentage among the layers. These embeddings showed either increased or decreased activities for true- versus false-belief trials (Fig. 2B). By contrast, there was no responsive embedding from the input layer up to layer 8 in this model (Fig. 2D left, right inset). A similar pattern was observed in the LLaMa-30b model (Fig. 2D left, middle inset), in which 5.6% of embeddings at the 19th layer exhibited selectivity to trial types, and very few were responsive from the input up to the 9th layer. This trend of significant artificial neurons being present in the middle and high layers was consistent across models. Next, we assessed the percentage of embeddings displaying significant selectivity from various models by using the percentage from the layer with the highest percentage in each model. In general, the percentage of significant embeddings increased with the model size (Fig. 2D left). For large models (≥ 12b), there was an average of 3.9% of embeddings responding to ToM tasks, and this percentage dropped to 0.6% for smaller models (T-test, statistic = -4.6, p = 4 × 10⁻⁴). Collectively, the percentage of significant embeddings was also closely correlated with the model performance (Fig. 2D right). For models with above-chance performance, the percentage of ToM-responsive embeddings increased non-linearly, with an exponential relation between the percentage and the performance
2309.01660#10
2309.01660#12
2309.01660
[ "2302.02083" ]
2309.01660#12
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
(percentage = a · exp(b · performance), where a = 0.01 ± 2.1 × 10⁻⁵ and b = 6.1 ± 4.4). Together, our findings revealed the presence of embeddings that displayed modulations related to the theory of mind content in multiple large models, a feature that was absent in smaller models with chance-level false-belief performance. Finally, to ensure the above findings cannot be explained by random fluctuation or other features unrelated to the ToM information in the statements, we conducted a control experiment by randomly permuting words in the statements. We then applied the same criterion to select responding embeddings. We found that the percentages were significantly lower compared to those resulting from the intact statements for large models (T-test, statistic = 4.1, p = 0.002) but not for small models (T-test, statistic = 1.46, p = 0.16). These results, together, indicated that the presence of ToM-responsive neurons in the large models cannot be explained by clues unrelated to the contextual information in the scenario statements. Therefore, although the percentage of ToM artificial neurons was considerably lower than that observed in the human brain (23%), there was an emergence of "artificial" neurons in middle and high layers of the large LLMs that responded to ToM features. # True and false beliefs can be decoded from the entire embeddings Next, to further investigate the relationships between the hidden embeddings and the models' ToM capability, we examined whether others' beliefs (i.e., true vs false beliefs) can be directly decoded from the population of hidden embeddings. Specifically, we used all dimensions of embeddings derived from each layer within a given model, and trained a logistic regression with L2 regularization to predict the trial types for trials that were not in the training dataset (details in Methods). Here, we find a majority of the true- and false-belief trial types were accurately decoded using the entire hidden embeddings from the 25th layer of the Falcon-40b model (Fig. 3A top). Furthermore, the activities of significant neurons exhibited far greater discrimination between false and true belief trials in correctly decoded trials compared to incorrectly decoded trials (average z-scored differences were 0.60 and 0.25, respectively; T-test, statistic = 17.9, p = 1.6 × 10⁻⁶², Fig. 3A bottom).
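For illustration, the exponential relation between the fraction of responsive embeddings and false-belief performance reported above could be fit with non-linear least squares roughly as follows; the arrays are placeholder values, not the measured data.

```python
# Illustrative sketch: fit percentage = a * exp(b * performance) across models.
import numpy as np
from scipy.optimize import curve_fit

def exp_model(performance, a, b):
    return a * np.exp(b * performance)

# Placeholder per-model values (false-belief accuracy above chance, fraction of significant embeddings).
performance = np.array([0.02, 0.05, 0.10, 0.15, 0.19])
percentage = np.array([0.004, 0.008, 0.019, 0.035, 0.063])

(a, b), cov = curve_fit(exp_model, performance, percentage, p0=(0.01, 5.0))
a_err, b_err = np.sqrt(np.diag(cov))        # one-standard-deviation uncertainties on a and b
print(f"a = {a:.3g} +/- {a_err:.1g}, b = {b:.2g} +/- {b_err:.1g}")
```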
2309.01660#11
2309.01660#13
2309.01660
[ "2302.02083" ]
2309.01660#13
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
Together, the activities of these artificial neurons therefore appeared to be predictive of the model's ToM performance. Examining all models together, the decoding accuracies increased with the size of the models, with large models (≥ 12b) showing an average of 75% decoding accuracy. The Falcon-40b model showed the highest decoding accuracy of 81%. The embeddings in smaller models (≤ 7b), however, could only predict the trial types at an average accuracy of 67%, which was significantly lower than those from the large models (T-test, statistic = -4.2, p = 0.001). This observation was also consistent with the ratio of responding neurons, together suggesting a relation between the size of the models and the proportion of artificial neurons capable of accurately predicting the other's
2309.01660#12
2309.01660#14
2309.01660
[ "2302.02083" ]
2309.01660#14
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
beliefs. Finally, to ensure that the decoding accuracies did not originate from factors unrelated to the scenario statements, we randomly permuted the words in each pair of the statements and repeated the same decoding procedures to decode the trial type (Methods). Here, the decoding accuracies from all models dropped to an average of only 55%, which was significantly lower than all accuracies without the random permutation (T-test, p < 3 × 10⁻¹¹⁰). The differences in accuracy between the intact and permuted control were higher for large models, with an average of 19%. These findings showed that the ToM trial types can be robustly decoded from the population of artificial neurons (embeddings), indicating a consistent encoding of ToM features by the embeddings. Together with the results from individual embeddings, our results collectively support the hypothesis that hidden embeddings possess the capacity to effectively predict the other's beliefs, suggesting their role in facilitating the models'
2309.01660#13
2309.01660#15
2309.01660
[ "2302.02083" ]
2309.01660#15
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
ToM performance. # Discussion The ability to discern between true and false beliefs represents a significant aspect of theory of mind that is proposed to be linked to our conscious mind (27, 28). Recent advancements in large language models (LLMs) have revealed their potential in distinguishing objective reality from false beliefs (10, 12). Our study aims to provide an initial investigation into the possible mechanisms underlying ToM in LLMs. By analyzing hidden embeddings from various open-source LLMs, we uncovered the presence of hidden embeddings that were predictive of the beliefs of others across richly varied scenarios. This finding is particularly remarkable, considering that the embeddings were derived from identical questions following narratives with very similar wording. This suggests the models' ability to not only differentiate subtle variations among closely related sentences, but also categorize them based on true and false beliefs, thereby encoding the perspective of others. These responses were absent when we randomly permuted the words in statements while keeping the questions intact. Additionally, the trial types (i.e., true- or false-belief) were accurately decoded from the population of embeddings, further validating the robust representation of ToM within the artificial models. Finally, we observed a strong and positive relation between the task performance and the proportion of ToM-responsive embeddings, suggesting their role in facilitating the performance. Collectively, our findings indicate an emergence of ToM-related embeddings in the artificial models, supporting the models' capability to capture essential aspects of ToM. Although, unlike humans, LLMs were trained solely on language materials and lacked the rich resources by which humans develop ToM capability (29, 30), the emergent behavior observed in the artificial models bears a striking resemblance to the neuronal activity associated with ToM in the human brain. With hidden embeddings as counterparts of brain neurons, both systems contain neurons that directly respond to the perspective of others. We showed that a substantial proportion of artificial neurons responded selectively to true- or false-belief trials, mirroring prefrontal neurons in humans exhibiting changes in firing rates for different trial types (9). Furthermore, the LLM layers with high percentages of ToM-responding embeddings were consistently not confined to one or two layers or distributed randomly. Rather, they showed a peak in the middle and high layers and almost zero in the input layers.
2309.01660#14
2309.01660#16
2309.01660
[ "2302.02083" ]
2309.01660#16
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
A similarly distributed set of areas for ToM was observed in the human brain, particularly within areas of the frontal, temporal and parietal cortices (9, 17-20), which have been identified as regions for high-level cognitive processing. ToM-related activity within lower input-processing areas such as the occipital lobe is minimal. Finally, we observed that the artificial layers exhibiting ToM responses were located in contiguous layers, analogous to the highly interconnected structure of ToM brain areas. Altogether, these observations are remarkable because humans rely on many years of development and real-world social interactions with others to form ToM capability (29, 30). The LLMs tested here, by comparison, are largely trained on vast language corpora with no explicit experience in interacting with others and no direct representation of agency. Yet, despite significant structural and algorithmic differences between the artificial and brain networks, they indeed exhibit surprising convergence by adopting a similar mechanism of encoding ToM information. This convergence is evident both in their capability to differentiate true and false beliefs and in the emergence of ToM-related neurons that facilitate such cognitive functions. Collectively, these results shed light on the potential of large language models to exhibit theory of mind capabilities and contribute to our understanding of cognitive processes in artificial intelligence. However, our findings are limited to open-source LLMs, as we did not have access to the hidden embeddings of higher-performing LLMs such as GPT-4 (7), which could offer further insights into the relationship between model performance and embedding representation. Further, our methods excluded embeddings that were selective to both true- and false-belief trials and only focused on the embeddings that showed selectivity to one of them. Nevertheless, our findings represent an initial exploration into the role of embeddings in ToM within language models and provide insights into how artificial intelligence can exhibit sophisticated cognitive abilities.
2309.01660#15
2309.01660#17
2309.01660
[ "2302.02083" ]
2309.01660#17
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
# Methods # Theory of mind (ToM) materials To assess the artificial language models' capacity for theory of mind and to ensure a direct comparison with human performance, we used testing materials previously employed in human studies during single-neuronal recordings. Minor adjustments were made to accommodate the specificities of artificial models (e.g., statements in pairs were slightly modified to have similar lengths). The ToM ability of each model was evaluated using 76 trials consisting of a scenario statement followed by two related questions: a
2309.01660#16
2309.01660#18
2309.01660
[ "2302.02083" ]
2309.01660#18
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
"belief question" related to the belief of the agent in the scenario statement and a "fact question" concerning the physical state of reality (Fig. 1, Table 1). Across all trials we presented, the lengths of the statements varied between 81 and 191 words, with an average of 125 words. Scenario statements. The trials were grouped in pairs, containing one true-belief and one false-belief trial in each pair. The trials in a pair start with very similar scenario statements providing background for the reader to infer whether the agent's belief in the story is aligned with reality or not (true-belief or false-belief, respectively; see examples in Table 1). In addition, we ensured each pair of true- and false-belief trials contained the same number of words in the statements, so that the potential variance stemming from different word positions in the sentence was minimized.
2309.01660#17
2309.01660#19
2309.01660
[ "2302.02083" ]
2309.01660#19
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
Questions for evaluating model performance. Based on the statements described above, we designed two categories of questions to test the ToM capability of the large language models (LLMs): a fact question and an other-belief question (Table 1). We edited the structure of the questions in order to obtain an objective evaluation of the model ability. For example, after a scenario statement like "Charles left his wallet on the counter as he was leaving the store. The wallet fell on the floor. Charles returns", if we asked "Where will Charles look for the wallet?"
2309.01660#18
2309.01660#20
2309.01660
[ "2302.02083" ]
2309.01660#20
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
, an LLM might generate a long paragraph without directly answering the question, making it subjective to assess whether the model answered the question correctly or not. Here, given that all LLMs we assessed generate outputs in the form of predicted upcoming words with a probability distribution across all possible tokens, we modified the questions to align with this characteristic of the LLMs. In the example provided above, we asked "Charles will look for the wallet on the". In this way, LLM models will likely predict a location for the upcoming word.
2309.01660#19
2309.01660#21
2309.01660
[ "2302.02083" ]
2309.01660#21
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
Question for evaluating others' belief processing by hidden embeddings. Here, the goal of these questions is not to evaluate model performance, but to examine whether hidden embeddings show selectivity to the trial types (false-belief or true-belief), and to directly compare the results to those from single neurons in human brains. Therefore, we used the same set of questions as those posed to human participants to ensure a reasonable comparison with findings from single neurons recorded in the prefrontal cortex of human brains. Specifically, we asked the same belief questions for each pair of true- and false-belief trials, using the same format as in (9), e.g.,
2309.01660#20
2309.01660#22
2309.01660
[ "2302.02083" ]
2309.01660#22
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
"Where will Charles look for his wallet?" In this way, the pair of true- and false-belief trials were composed with very similar words and with exactly the same questions (Table 1, Fig. 2).
Table 1. Example of the task materials
False belief | Statement: Mary put fish inside a jewelry box while her son wasn't looking. Her son opens the box. | Fact question: Inside the box, there is | Belief question: Inside the box, he expects to find | Belief question in the human study: What does he expect to find?
True belief | Statement: Mary put jewelry inside a jewelry box and her son sees it. Her son opens the box. | Fact question: Inside the box, there is | Belief question: Inside the box, he expects to find | Belief question in the human study: What does he expect to find?
2309.01660#21
2309.01660#23
2309.01660
[ "2302.02083" ]
2309.01660#23
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
False belief | Statement: Ned and you take a photo of an apple on a tree. While the photo develops, Ned leaves and is unaware that a wind blows the apple to the ground. | Fact question: Currently, the apple is on the | Belief question: Ned believes that the apple is on the | Belief question in the human study: Where does Ned believe the apple is?
True belief | Statement: Ned and you take a photo of an apple on a tree. While the photo develops, you and Ned see a strong wind blow the apple on the ground. | Fact question: Currently, the apple is on the | Belief question: Ned believes that the apple is on the | Belief question in the human study: Where does Ned believe the apple is?
False belief | Statement: Charles left his wallet on the counter as he was leaving the store. The wallet fell on the floor. Charles returns. | Fact question: The wallet is on the | Belief question: Charles will look for the wallet on the | Belief question in the human study: Where will Charles look for the wallet?
True belief | Statement: Charles left his wallet on the counter as he was leaving the store. No one has touched his wallet. Charles returns. | Fact question: The wallet is on the | Belief question: Charles will look for the wallet on the | Belief question in the human study: Where will Charles look for the wallet?
2309.01660#22
2309.01660#24
2309.01660
[ "2302.02083" ]
2309.01660#24
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
# Control tasks To ensure our observations were not derived from factors unrelated to the scenario created in the statements, we performed the following two controls. First, we created shuffled control trials by randomly permuting words in each statement while keeping the questions intact (Table 2). In this way, we kept the same words in the statement but eliminated the contextual information. Second, we estimated the impact of any clues within the questions (e.g., the potential imbalance of word frequency) by inputting each model with the questions only. The combination of these two controls provides an estimate of the impact of factors unrelated to the ToM-related content provided by the statement.
Table 2. Example of a control task created by randomly shuffling words in the statement
False belief | Statement: her son jewelry Mary looking. Her fish son put while box inside wasn't opens the box. a | Fact question: Inside the box, there is | Belief question: Inside the box, he expects to find | Belief question in the human study: What does he expect to find?
True belief | Statement: inside Her and the box it. Mary her box. jewelry a opens son put jewelry sees son | Fact question: Inside the box, there is | Belief question: Inside the box, he expects to find | Belief question in the human study: What does he expect to find?
False belief | Statement: and take the photo the a wind an Ned Ned leaves tree. apple on is unaware a photo blows and develops, ground. While of you apple a to that | Fact question: Currently, the apple is on the | Belief question: Ned believes that the apple is on the | Belief question in the human study: Where does Ned believe the apple is?
True belief | Statement: While on you develops, the on you the Ned apple blow the apple an tree. Ned and take and photo a ground. strong a wind of see a photo | Fact question: Currently, the apple is on the | Belief question: Ned believes that the apple is on the | Belief question in the human study: Where does Ned believe the apple is?
False belief | Statement: on store. his left as the counter leaving was The wallet returns on the Charles wallet floor. fell Charles the he | Fact question: The wallet is on the | Belief question: Charles will look for the wallet on the | Belief question in the human study: Where will Charles look for the wallet?
True belief | Statement: has No his one counter store. the returns. as on wallet wallet. Charles Charles the was he leaving touched his left | Fact question: The wallet is on the | Belief question: Charles will look for the wallet on the | Belief question in the human study: Where will Charles look for the wallet?
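A minimal sketch of the shuffled-statement control is shown below; it is an illustration under the stated design (shuffle the statement's words, keep the question intact), not the authors' script, and the seed handling is an assumption.

```python
# Illustrative sketch: build a shuffled-statement control trial, keeping the question intact.
import random

def make_shuffled_control(statement: str, question: str, seed: int = 0) -> tuple[str, str]:
    rng = random.Random(seed)          # assumption: fixed seed for reproducibility
    words = statement.split()
    rng.shuffle(words)                 # same words, but the scenario's contextual information is destroyed
    return " ".join(words), question

statement = ("Mary put fish inside a jewelry box while her son wasn't looking. "
             "Her son opens the box.")
question = "Inside the box, he expects to find"
shuffled_statement, question = make_shuffled_control(statement, question)
```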
2309.01660#23
2309.01660#25
2309.01660
[ "2302.02083" ]
2309.01660#25
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
# Large language models (LLMs) Our study primarily focuses on four high-performing, independently trained language models that are publicly available as open source. All LLM models examined were composed of transformer modules connected in sequence. Each LLM contains multiple versions, characterized by varying numbers of parameters and potential fine-tuning on specific datasets. Specifically, these models include Falcon (1b, 7b, 40b), LLaMa (3b, 7b, 13b, 30b, 33b), Pythia (3b, 7b, 12b), and GPT-2 (medium, large, xl). The tokenizers and parameters of all models were downloaded in July 2023 and have not been updated since then. The details of the model information and the datasets they were fine-tuned on are listed in Table 3. In our study, all models and tokenizers were loaded via Huggingface in Python (31). For models with a parameter count of less than or equal to 7b, we utilized a desktop computer with a single GPU (NVIDIA GeForce RTX 4090). For larger models, we utilized the Massachusetts General Hospital GPU cluster facility with up to eight GPUs (NVIDIA DGX-1) for model performance and evaluations.
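As a rough illustration of this setup, one of the Table 3 checkpoints could be loaded through the transformers library as in the sketch below; the precision and device settings shown are assumptions, not the authors' exact configuration.

```python
# Illustrative sketch: load one of the open-source checkpoints listed in Table 3.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tiiuae/falcon-7b"                 # one of the repositories named in Table 3
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,                  # assumption: half precision to fit on a single GPU
    device_map="auto",                          # spread layers across available GPUs for larger models
    output_hidden_states=True,                  # expose hidden embeddings from every layer
)
model.eval()
```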
2309.01660#24
2309.01660#26
2309.01660
[ "2302.02083" ]
2309.01660#26
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
Table 3. Large language models examined in this study
Model name | Model source | Size | Description from model developer
Falcon-1b | Falcon (tiiuae/falcon-rw-1b) | 1b | Decoder model; Trained on 350B tokens of RefinedWeb (22)
Falcon-7b | Falcon (tiiuae/falcon-7b) | 7b | Decoder model; Trained on 1,500B tokens of RefinedWeb; Enhanced with curated corpora.
Falcon-40b | Falcon | 40b | Decoder model; Based on Falcon-40B; Finetuned on a mixture of Baize.
LLaMa-3b-1 | LLaMa | 3b | An Open Reproduction of LLaMA (32)
LLaMa-7b-1 | LLaMa | 7b | An Open Reproduction of LLaMA
LLaMa-13b-1 | LLaMa | 13b | Merge of LLAMA-13b and SuperCOT LoRA (33)
LLaMa-30b-1 | LLaMa | 30b | Supercot; Work with langchain prompting
LLaMa-7b-2 | LLaMa | 7b | Chatbot; Fine-tuned on user-shared conversations from ShareGPT (34)
LLaMa-13b-3 | LLaMa | 13b | Fine-tuned on the ShareGPT, WizardLM, and Wizard-Vicuna datasets (35)
LLaMa-33b-4 | LLaMa | 33b | Focused on chat, roleplay, and story-writing (36)
Pythia-3b | Pythia | 3b | Trained on the Databricks machine learning platform (37)
Pythia-7b | Pythia | 7b | Trained on the Databricks machine learning platform
Pythia-12b | Pythia (databricks/dolly-v2-12b) | 12b | Trained on the Databricks machine learning platform
# Evaluating ToM performance Using the ToM materials described above, for each trial, we concatenated the statement and the corresponding question and fed them into the model. From the model output, we obtained the model's prediction of the next word by examining the output logits of all possible tokens. These logits are monotonically related to the probability of the upcoming word predicted by the model. Then we specifically examined the logits of the two possible answers for the belief and fact questions. To determine the LLMs'
2309.01660#25
2309.01660#27
2309.01660
[ "2302.02083" ]
2309.01660#27
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
answer, we chose the word with the highest logit value out of the two word choices (e.g., floor or counter) to ensure the selection of more reliable predictions and avoid instances where certain models generate irrelevant outputs. Then the accuracies of each model were calculated for true beliefs, false beliefs, and facts by considering all trials with corresponding questions. The same procedures were followed for the two control conditions described above to further verify our findings. # Hidden embeddings' selectivity for true- or false-belief trials For each LLM, we tested their ToM capacity by extracting the model's hidden embeddings from all layers along with the predicted logit for the upcoming token during the task. Specifically, for each trial, we concatenated the statements and the questions we presented to the human participants as a single input to the LLM model (Table 1, Fig. 2a). The hidden embeddings were obtained from the output of each transformer module, in addition to the one that was input to the first transformer module for each model. The dimension of the returned embeddings for each model in this step was trials x words x nodes x layers, where words included those from both the statement and the question, and nodes referred to the embedding size of a layer. Following a comparable approach employed to evaluate ToM-related activities from single neurons in the human brain, we used the embeddings corresponding to the question tokens and subsequently calculated the average values across all question tokens (dimension of trials x nodes x layers). We then performed statistical tests to evaluate whether each embedding (looping over nodes and layers) exhibited significant responses to trial conditions (i.e., true-belief and false-belief). Particularly, we compared the embedding values between these two trial conditions with the Mann-Whitney U test, testing the null hypothesis that the distributions of the two categories were the same. The statistic for the Mann-Whitney U test is the minimum of U1 and U2, defined as
U1 = n1·n2 + n1(n1 + 1)/2 - R1
U2 = n1·n2 + n2(n2 + 1)/2 - R2
where n1 and n2 are the numbers of trials in the two groups, and R1 and R2
2309.01660#26
2309.01660#28
2309.01660
[ "2302.02083" ]
2309.01660#28
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
are the sums of the ranks for groups 1 and 2, respectively. We used a threshold of 0.05 to determine whether a given dimension of an embedding demonstrated a significant association with the trial category. Next, we grouped the embeddings based on their layers and models, and calculated the percentage of embeddings that showed higher-than-chance responsiveness. We examined all embeddings across different layers within a given LLM; then, for each model, we selected the layer with the highest percentage of responsive embeddings as the percentage of that model.
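The per-dimension selectivity test could be implemented roughly as below, assuming the question-token-averaged embeddings for one layer are already collected in an array; this is a sketch with placeholder names, not the authors' implementation.

```python
# Illustrative sketch: test each embedding dimension of one layer for trial-type selectivity.
import numpy as np
from scipy.stats import mannwhitneyu

def fraction_selective(layer_emb: np.ndarray, is_false_belief: np.ndarray, alpha: float = 0.05) -> float:
    """layer_emb: (n_trials, n_nodes) question-averaged embeddings; is_false_belief: boolean per trial."""
    n_significant = 0
    for node in range(layer_emb.shape[1]):
        fb_values = layer_emb[is_false_belief, node]
        tb_values = layer_emb[~is_false_belief, node]
        _, p_value = mannwhitneyu(fb_values, tb_values, alternative="two-sided")
        if p_value < alpha:
            n_significant += 1
    return n_significant / layer_emb.shape[1]   # fraction of dimensions selective to trial type
```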
2309.01660#27
2309.01660#29
2309.01660
[ "2302.02083" ]
2309.01660#29
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
All steps described here were repeated for the control experiments with the randomly permuted words in the statements described above for further verification. # Decoding the trial type using the population of embeddings In order to examine whether there is a causal relationship between the observed selectivity of the embeddings and model performance, we conducted a decoding analysis using the entire population of embeddings of each layer. Specifically, for each layer, from the embeddings with the dimension of trials x words x nodes, we averaged across question tokens for each trial for a given layer of each model, resulting in the dimension of trials x nodes. We considered nodes as the equivalent of neurons in the brain to predict the type of trials as the target variable. We used a 75% training and 25% testing split based on the pair of trials, so that trials within a pair were not separated into the two datasets. We used a logistic regression classifier with L2 regularization (C = 1), which minimizes the cross-entropy loss with a penalty on the square of the parameter values:
min_w  C · Σ_{i=1..n} [ -y_i · log(p̂(X_i)) - (1 - y_i) · log(1 - p̂(X_i)) ] + (1/2) · ‖w‖²
where the target variable y_i belongs to the set {0, 1} for data point i, and w
2309.01660#28
2309.01660#30
2309.01660
[ "2302.02083" ]
2309.01660#30
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
is the weight vector. For each layer of a given LLM, we performed the same analysis 100 times with different train and test splits, and calculated the average accuracies across these iterations. At the end, the decoding accuracy of each model was calculated by taking the average over all layers of the model. As a control, we repeated the same procedures for the same layer, but using the ToM materials with the randomly permuted words in the statements. # Acknowledgement We are grateful to Yuedong Fang, Yue Cao and Nikola Bolt for their comments to improve the manuscript, and Douglas Kellar for facilitating access to computational resources. We acknowledge the utilization of ChatGPT for assistance in refining the wording of this manuscript (38). Z.M.W. is supported by NIH U01NS123130. # Code availability All code will be made publicly available on GitHub when the manuscript is accepted for publication.
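Pending that release, the decoding step described above can be sketched as follows; this is an illustrative reimplementation assuming scikit-learn (logistic regression, L2 penalty with C = 1, 75/25 split by trial pair, 100 repetitions), not the authors' code, and the array names are placeholders.

```python
# Illustrative sketch: decode trial type from one layer's embeddings, splitting by trial pair.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupShuffleSplit

def decode_layer(layer_emb: np.ndarray, labels: np.ndarray, pair_ids: np.ndarray,
                 n_repeats: int = 100, seed: int = 0) -> float:
    """layer_emb: (n_trials, n_nodes); labels: 0/1 trial type; pair_ids: shared id for each true/false pair."""
    splitter = GroupShuffleSplit(n_splits=n_repeats, test_size=0.25, random_state=seed)
    accuracies = []
    for train_idx, test_idx in splitter.split(layer_emb, labels, groups=pair_ids):
        clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
        clf.fit(layer_emb[train_idx], labels[train_idx])
        accuracies.append(clf.score(layer_emb[test_idx], labels[test_idx]))
    return float(np.mean(accuracies))
```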
2309.01660#29
2309.01660#31
2309.01660
[ "2302.02083" ]
2309.01660#31
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
[Figure 1: panel graphics. A. Example of the Theory of Mind (ToM) material. B. Large language model (LLM) performance. C. Large language model (LLM) performance on false belief trials by model size.] Figure 1.
2309.01660#30
2309.01660#32
2309.01660
[ "2302.02083" ]
2309.01660#32
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
Theory of Mind capability in various large language models. A. ToM tasks comprising statements and questions were input to each LLM, and the predicted upcoming word and probabilities were examined (Method). The ToM trials were either true- or false-belief, depending on whether the agent's viewpoint was aligned with the reality or not. In addition, we assessed the models' ability to answer questions about the factual state of reality provided by the statements. B. Model performance on questions of true-belief (left), false-belief (middle), and fact trials (right). For control experiments, we randomly permuted words within the statements, input these shuffled words along with the questions to the models, and repeated the same model evaluation procedures. We also assessed models' performance on question-only trials without the statements to evaluate the impact of factors unrelated to the context provided by the statements. C.
2309.01660#31
2309.01660#33
2309.01660
[ "2302.02083" ]
2309.01660#33
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
LLMs' accuracies in answering false-belief questions and their dependency on the size of the models. We plotted the accuracy improvement resulting from inputting statements and questions, compared to the accuracy from only inputting questions, across different LLMs.
[Figure 2: panel graphics. A. Example of the Theory of Mind (ToM) material: statement and belief question for false belief (FB) and true belief (TB) trials, LLM hidden embeddings from statements and questions, and evaluation of question embeddings with a Mann-Whitney U test (H0: same distribution). B. Example of an embedding responding to trial type (Falcon-40b, layer 23). C. Embeddings in Falcon-40b (layers 5, 25, 45) show selective responses to true and false belief trials.]
2309.01660#32
2309.01660#34
2309.01660
[ "2302.02083" ]
2309.01660#34
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
[Figure 2, continued: panel C scatter plots of average embedding values (z-scored) for false-belief versus true-belief trials; panel D, percentage of embeddings responding significantly to true or false beliefs, shown by layer for LLaMa-13b-3, LLaMa-30b-1 and Falcon-40b and plotted against model accuracy above chance on false-belief trials.] Figure 2. Responding embeddings to true- versus false-belief trials. A. To investigate whether hidden embeddings exhibit selective modulations to true versus false beliefs and to compare the artificial embeddings with human single neurons, we employed similar ToM tasks as those previously tested on humans. For each trial, the statement and the question were concatenated and input to the LLMs, obtaining hidden embeddings from all words and layers (Methods). We then excluded embeddings from words during the statement and computed average values for words within the question. A Mann-Whitney U test was conducted to examine whether a hidden embedding exhibited a significant difference between false- and true-belief trials.
2309.01660#33
2309.01660#35
2309.01660
[ "2302.02083" ]
2309.01660#35
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
B. Distributions of embedding values from Falcon-40b layer 23 to illustrate that the activities were significantly different for false-belief trials than for true-belief trials. C. Examples from different layers of the Falcon-40b model show the average embedding values over true- and false-belief trials. Each dot represents a dimension of the hidden embedding; orange dots indicate the embedding with significant differences between trial types, while the gray dots indicate no significance. D. The percentage of embedding dimensions significantly selective to true and false beliefs varies across models and layers (left), with the Falcon-40b model demonstrating the highest percentage. These results are compared to the percentage of single neurons in the human brain (light green). The percentages across layers of three example models are shown in the insets. The percentages of significant embeddings across different models were found to be dependent on the false-belief trial accuracy (right).
2309.01660#34
2309.01660#36
2309.01660
[ "2302.02083" ]
2309.01660#36
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
[Figure 3 panel graphics (plot values not recoverable): A. Decoding trial types from hidden embeddings (Falcon-40b, layer 25; decoded true- versus false-belief probabilities; correct versus incorrect trials). B. Trial-type decoding results across models (decoding accuracy; absolute difference between TB and FB; actual inputs versus randomly permuted words in statements).]
Figure 3. Decoding trial types using hidden embeddings. A. Using Falcon-40b as an example, higher probabilities of the correct trial type were observed for most observations decoded from all embeddings at layer 25 (top). The selected embeddings showed a greater difference between true- and false-belief trials in correctly decoded trials compared to incorrectly decoded trials (bottom).
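As a rough illustration of this kind of trial-type decoding, the sketch below fits a cross-validated linear classifier on layer embeddings; the classifier choice and cross-validation scheme are assumptions rather than the authors' exact decoder:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def decode_trial_type(X, y, folds=5):
    """X: (n_trials, hidden_dim) embeddings averaged over question tokens;
    y: 1 for false-belief trials, 0 for true-belief trials.
    Returns mean cross-validated decoding accuracy."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=folds, scoring="accuracy").mean()

# Toy usage with random data; chance level is 0.5 for balanced classes.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(80, 512)), rng.integers(0, 2, size=80)
print(f"cross-validated decoding accuracy: {decode_trial_type(X, y):.2f}")
```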
2309.01660#35
2309.01660#37
2309.01660
[ "2302.02083" ]
2309.01660#37
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
B. Across different models, larger models generally demonstrated higher decoding accuracy for true- and false-belief trials using all embeddings from each layer. In contrast, decoding accuracies remained consistently low when the words in the statements were randomly permuted before being input to the LLMs.

# References

1. N. Aggarwal, G. J. Saxena, S. Singh, A. Pundir, Can I say, now machines can think? arXiv preprint arXiv:2307.07526 (2023).
2. M. Sallam, in Healthcare (MDPI, 2023), vol. 11, p. 887.
3. J. He-Yueya, G.
2309.01660#36
2309.01660#38
2309.01660
[ "2302.02083" ]
2309.01660#38
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
Poesia, R. E. Wang, N. D. Goodman, Solving math word problems by combining language models with symbolic solvers. arXiv preprint arXiv:2304.09102 (2023).
4. Z. Yuan, H. Yuan, C. Tan, W. Wang, S. Huang, How well do large language models perform in arithmetic tasks? arXiv preprint arXiv:2304.02015 (2023).
5. L. Pan, A. Albalak, X. Wang, W. Y. Wang, Logic-LM: Empowering large language models with symbolic solvers for faithful logical reasoning. arXiv preprint arXiv:2305.12295 (2023).
2309.01660#37
2309.01660#39
2309.01660
[ "2302.02083" ]
2309.01660#39
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
6. S. Yao et al., Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601 (2023).
7. OpenAI, GPT-4 Technical Report. arXiv preprint arXiv:2303.08774 (2023).
8. C. Frith, U. Frith, Theory of mind. Current Biology 15, R644-R645 (2005).
9. M. Jamali et al., Single-neuronal predictions of others' beliefs in humans. Nature 591, 610-614 (2021).
10. M.
2309.01660#38
2309.01660#40
2309.01660
[ "2302.02083" ]
2309.01660#40
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
Kosinski, Theory of mind may have spontaneously emerged in large language models. arXiv preprint arXiv:2302.02083 (2023).
11. T. Ullman, Large language models fail on trivial alterations to theory-of-mind tasks. arXiv preprint arXiv:2302.08399 (2023).
12. S. Trott, C. Jones, T. Chang, J. Michaelov, B. Bergen, Do large language models know what humans know? Cognitive Science 47, e13309 (2023).
2309.01660#39
2309.01660#41
2309.01660
[ "2302.02083" ]
2309.01660#41
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
13. M. C. Frank, Baby steps in evaluating the capacities of large language models. Nature Reviews Psychology, 1-2 (2023).
14. H. M. Wellman, D. Cross, J. Watson, Meta-analysis of theory-of-mind development: The truth about false belief. Child Development 72, 655-684 (2001).
15. H. Wimmer, J. Perner, Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children's understanding of deception. Cognition 13, 103-128 (1983).
2309.01660#40
2309.01660#42
2309.01660
[ "2302.02083" ]
2309.01660#42
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
16. K. Milligan, J. W. Astington, L. A. Dack, Language and theory of mind: Meta-analysis of the relation between language ability and false-belief understanding. Child Development 78, 622-646 (2007).
17. V. E. Stone, S. Baron-Cohen, R. T. Knight, Frontal lobe contributions to theory of mind. Journal of Cognitive Neuroscience 10, 640-656 (1998).
18. M. Siegal, R. Varley, Neural systems involved in 'theory of mind'. Nature Reviews Neuroscience 3, 463-471 (2002).
2309.01660#41
2309.01660#43
2309.01660
[ "2302.02083" ]
2309.01660#43
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
19. R. Saxe, N. Kanwisher, People thinking about thinking people: the role of the temporo-parietal junction in "theory of mind". Neuroimage 19, 1835-1842 (2003).
20. R. Saxe, L. J. Powell, It's the thought that counts: specific brain regions for one component of theory of mind. Psychological Science 17, 692-699 (2006).
21. G. Penedo et al., The RefinedWeb dataset for Falcon LLM: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116 (2023).
22. E. Almazrouei et al. (2023).
23. H.
2309.01660#42
2309.01660#44
2309.01660
[ "2302.02083" ]
2309.01660#44
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
Touvron et al., LLaMA: open and efficient foundation language models (2023). URL https://arxiv.org/abs/2302.13971.
24. S. Biderman et al., in International Conference on Machine Learning (PMLR, 2023), pp. 2397-2430.
25. A. Radford et al., Language models are unsupervised multitask learners. OpenAI blog 1, 9 (2019).
26. E. Beeching et al., Open LLM Leaderboard.
2309.01660#43
2309.01660#45
2309.01660
[ "2302.02083" ]
2309.01660#45
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
Hugging Face (2023).
27. U. Frith, F. Happé, Theory of mind and self-consciousness: What is it like to be autistic? Mind & Language 14, 82-89 (1999).
28. J. Perner, Z. Dienes, Developmental aspects of consciousness: How much theory of mind do you need to be consciously aware? Consciousness and Cognition 12, 63-82 (2003).
29. J. I. Carpendale, C.
2309.01660#44
2309.01660#46
2309.01660
[ "2302.02083" ]
2309.01660#46
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
Lewis, Constructing an understanding of mind: The development of children's social understanding within social interaction. Behavioral and Brain Sciences 27, 79-96 (2004).
30. C. Lewis, N. H. Freeman, C. Kyriakidou, K. Maridaki-Kassotaki, D. M. Berridge, Social influences on false belief access: Specific sibling influences or general apprenticeship? Child Development 67, 2930-2947 (1996).
2309.01660#45
2309.01660#47
2309.01660
[ "2302.02083" ]
2309.01660#47
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
31. T. Wolf et al., Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771 (2019).
32. X. Geng, H. Liu (2023).
33. https://huggingface.co/ausboss/llama-13b-supercot.
34. W.-L. Chiang et al., Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality (2023). See https://vicuna.lmsys.org (accessed 14 April 2023).
35. https://huggingface.co/openaccess-ai-collective/wizard-mega-13b.
36. https://huggingface.co/elinas/chronos-33b.
37. M.
2309.01660#46
2309.01660#48
2309.01660
[ "2302.02083" ]
2309.01660#48
Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
Conover et al., Free dolly: Introducing the world's first truly open instruction-tuned llm (2023).
38. ChatGPT, https://chat.openai.com/chat.
2309.01660#47
2309.01660
[ "2302.02083" ]
2309.01219#0
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
# Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models

Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi

Tencent AI Lab; Soochow University; Zhejiang University; Renmin University of China; Nanyang Technological University; Toyota Technological Institute at Chicago
2309.01219#1
2309.01219
[ "2307.03109" ]
2309.01219#1
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
# Abstract While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated con- text, or misaligns with established world knowledge. This phenomenon poses a sub- stantial challenge to the reliability of LLMs in real-world scenarios. In this paper, we survey recent efforts on the detection, ex- planation, and mitigation of hallucination, with an emphasis on the unique challenges posed by LLMs. We present taxonomies of the LLM hallucination phenomena and evaluation benchmarks, analyze existing ap- proaches aiming at mitigating LLM halluci- nation, and discuss potential directions for future research.
2309.01219#0
2309.01219#2
2309.01219
[ "2307.03109" ]
2309.01219#2
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
User Input: Can you recommend a delicious recipe for dinner?
LLM Response: Yes, here is a delicious recipe for lunch. So how about fried chicken with mashed potatoes? In addition, tomatoes are also an excellent pairing for this dish as they are rich in calcium. Enjoy this steak!
Hallucination Explanation:
Input-Conflicting Hallucination: the user wants a recipe for dinner, while the LLM provides one for lunch.
Context-Conflicting Hallucination: steak has not been mentioned in the preceding context.
Fact-Conflicting Hallucination: tomatoes are not in fact rich in calcium.
2309.01219#1
2309.01219#3
2309.01219
[ "2307.03109" ]
2309.01219#3
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Figure 1: Three types of hallucinations occurred in LLM responses (best viewed in color). # Introduction Large language models (LLMs), particularly char- acterized by their substantial number of param- eters, have arisen as a promising cornerstone for the development of natural language pro- cessing (NLP) and artificial intelligence (Zhao et al., 2023c). With proper alignment techniques, such as supervised finetuning (SFT; Zhang et al., 2023b) and reinforcement learning from human feedback (RLHF; Ouyang et al., 2022; Fernan- des et al., 2023), recent LLMs (OpenAI, 2023a; Touvron et al., 2023b; OpenAI, 2023b, inter alia) have exhibited strong capabilities in solving vari- ous downstream tasks. Nonetheless, as exemplified in Figure 1, LLMs, despite their remarkable success, occasionally
2309.01219#2
2309.01219#4
2309.01219
[ "2307.03109" ]
2309.01219#4
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
â This survey paper was completed during Yue Zhang ([email protected]), Yafu Li, Tingchen Fu, and Yu Zhangâ s internships at Tencent AI Lab. # â Corresponding author ([email protected]). produce outputs that, while seemingly plausible, deviate from user input (Adlakha et al., 2023), pre- viously generated context (Liu et al., 2022), or fac- tual knowledge (Min et al., 2023; Muhlgay et al., 2023; Li et al., 2023a)â this phenomenon is com- monly referred to as hallucination, which signifi- cantly undermines the reliability of LLMs in real- world scenarios (Kaddour et al., 2023). For in- stance, LLMs can potentially fabricate erroneous medical diagnoses or treatment plans that lead to tangible real-life risks (Umapathi et al., 2023). While hallucination in conventional natural lan- guage generation (NLG) settings has been widely studied (Ji et al., 2023), understanding and ad- dressing the hallucination problem within the realm of LLMs encounters unique challenges in- troduced by 1.
2309.01219#3
2309.01219#5
2309.01219
[ "2307.03109" ]
2309.01219#5
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Massive training data: in contrast to carefully curating data for a specific task, LLM pre-
[Figure 2 diagram (layout not fully recoverable): Definition (Sec. 2): input-conflicting, context-conflicting, and fact-conflicting hallucination. Benchmark (Sec. 3): e.g., BEGIN, QMSum, HADES, TruthfulQA, FActScore, FENMT, FEQA, HaluEval, FACTOR. Sources (Sec. 4): parametric memorization, overinflated self-confidence, misleading alignment, generation-time risk, arranged along the pre-training, SFT, RLHF, and inference timeline. Mitigation (Sec. 5): curating training data, honesty-oriented SFT, honesty-oriented RL, decoding strategy, knowledge retrieval, exploiting uncertainty.]
Figure 2: The overview structure of this paper: We initially categorize LLM hallucinations into three distinct types and then introduce corresponding evaluation benchmarks. Subsequently, we explore the source of hallucinations and discuss mitigation strategies throughout the life cycle of LLMs (pre-training→
2309.01219#4
2309.01219#6
2309.01219
[ "2307.03109" ]
2309.01219#6
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
SFT→RLHF→inference). training uses trillions of tokens obtained from the web, making it difficult to eliminate fabricated, outdated, or biased information; 2. Versatility of LLMs: general-purpose LLMs are expected to excel in cross-task, cross-lingual, and cross-domain settings, posing challenges for comprehensive evaluation and mitigation of hallucination. 3. Imperceptibility of errors: as a byproduct of their strong abilities, LLMs may generate false information that initially seems highly plausible, making it challenging for models or even humans to detect hallucination. In addition, the RLHF process (Ouyang et al., 2022), the vague knowledge boundary (Ren et al., 2023), and the black-box property of LLMs (Sun et al., 2022) also complicate the detection, explanation, and mitigation of hallucination in LLMs. There has been a notable upsurge in cutting-edge research dedicated to addressing the aforementioned challenges, which strongly motivates us to compile this survey. We organize this paper as follows, as also depicted in Figure 2. We first introduce the background of LLMs and offer our definition of hallucination in LLMs (§2). Next, we introduce relevant benchmarks and metrics (§3). Subsequently, we discuss potential sources of LLM hallucinations (§4), and provide an in-depth review of recent work towards addressing the problem (§5). Finally, we present forward-looking perspectives (§6). We will consistently update the related open-source materials, which can be accessed at https://github.com/HillZhang1999/llm-hallucination-survey.

# 2 Hallucination in the Era of LLM

We begin this section by overviewing the history of LLMs (§2.1). Next, we present our definition of hallucination, by breaking it down
2309.01219#5
2309.01219#7
2309.01219
[ "2307.03109" ]
2309.01219#7
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
2 into three sub-categories (§2.2). In addition, we discuss the unique challenges of hallucination in LLMs (§2.3), and compare hallucination with other prevalent problems that are frequently en- countered in the realm of LLMs (§2.4). # 2.1 Large Language Models An important category of LLMs is autoregressive language models (Radford et al., 2019; Chowd- inter hery et al., 2022; Touvron et al., 2023a, alia). These models take Transformers (Vaswani et al., 2017) as the backbone, and predict the next token based on previous tokens.1 Prior to the widespread adoption of Transformers, autoregres- sive language models were built on the backbones of n-grams (Bickel et al., 2005; Pauls and Klein, 2011) and recurrent neural networks (Mikolov et al., 2010), and have been applied to various NLG tasks such as summarization (Nallapati et al., 2017) and dialogue generation (Chen et al., 2017). Transformer-based LLMs have demonstrated exceptional performance across tasks, and have therefore shifted NLP from a paradigm centered on task-specific solutions to general-purpose pre- training (Devlin et al., 2019; Radford et al., 2019). The pretrained models are optimized on various self-supervision objectives (Devlin et al., 2019; inter Raffel et al., 2020; Lewis et al., 2020a, alia), using large-scale unlabeled corpora. Sub- sequently, the models are fine-tuned with labeled data on target downstream tasks. Representations from the pretrained models can typically reduce the demand for annotated data and achieve sig- nificant performance improvement across down- stream tasks (Qiu et al., 2020; Min et al., 2021; Li et al., 2022b, inter alia). In addition to performance improvement on downstream tasks, recent work has found that scal- ing up pretrained language modelsâ both in terms of model parameter count and the volume of pre- training dataâ
2309.01219#6
2309.01219#8
2309.01219
[ "2307.03109" ]
2309.01219#8
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
enables some remarkable abilities, including in-context learning (Brown et al., 2020), reasoning (Wei et al., 2022), and instruction fol- lowing (Ouyang et al., 2022). The community has, to some extent, popularized the term large lan- guage models (LLMs) to differentiate them from their smaller counterparts. Notably, LLMs exhibit the potential to accurately comprehend human in- structions and efficiently tackle a variety of com- 1Another variant of language models predicts masked to- kens in a corrupted sequence (Devlin et al., 2019; Liu et al., 2019; Lan et al., 2019, inter alia).
2309.01219#7
2309.01219#9
2309.01219
[ "2307.03109" ]
2309.01219#9
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
3 plex tasks with only minimal or even no supervi- sion (OpenAI, 2023a,b; Touvron et al., 2023b). # 2.2 What is LLM Hallucination While LLMs have demonstrated remarkable per- formances, they still inevitably encounter different problems in practical applications, where halluci- nation is one of the most significant issues among them. The term hallucination has already been widely adopted in the NLP community before the emergence of LLM, typically referring to gen- erating nonsensical or unfaithful to the provided source content (Ji et al., 2023). We argue that the definition appears to have considerably expanded due to the versatility of LLMs. To this end, we categorize hallucination within the context of LLMs as follows:
2309.01219#8
2309.01219#10
2309.01219
[ "2307.03109" ]
2309.01219#10
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
• Input-conflicting hallucination, where LLMs generate content that deviates from the source input provided by users;
• Context-conflicting hallucination, where LLMs generate content that conflicts with previously generated information by itself;
• Fact-conflicting hallucination, where LLMs generate content that is not faithful to established world knowledge.

We present examples for each type of hallucination in Table 1, and discuss them in detail below. Input-conflicting hallucination. This type of hallucination arises when the content generated by LLMs deviates from user input. Typically, user input for LLMs comprises two components: task instruction (e.g., user prompt for summarization) and task input (e.g., document to be summarized). The contradiction between the LLM response and task instructions typically reflects a misunderstanding of user intents. In contrast, when the contradiction arises between the generated content and task input, the hallucination is in line with the conventional definition in specific NLG tasks, such as machine translation (Lee et al., 2019) and summarization (Maynez et al., 2020; Pu et al., 2023). For instance, the first example in Table 1 highlights a contradiction between the generated content and task input: when users request the LLM to generate a summary, the LLM incorrectly replaces the person's name in its response (Hill→Lucas), even though the general form can indeed be perceived as a suitable summary. Type Turn Example Input-conflicting Hallucination User Please summarize the following content:
2309.01219#9
2309.01219#11
2309.01219
[ "2307.03109" ]
2309.01219#11
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
My friend Hill and I love play- ing basketball together. We spend hours on the court, dribbling, shoot- ing, and competing against each other. Our passion for this game brings us closer as good friends. Model Lucas and I love playing basketball and this makes us good friends. Context-conflicting Hallucination User Model Tell me something about the NBA Commissioner. In the worldâ s most popular basketball league, the NBA, Commissioner Adam Silver has long been regarded as a leader and decision-maker. Firstly, Silver has claimed his dedication to maintaining fairness and competitiveness in the game. Secondly, Silver has repeatedly empha- sized his concern for player health and well-being. In this case, Sternâ s words and actions clash with his stated goal of caring for player welfare.
2309.01219#10
2309.01219#12
2309.01219
[ "2307.03109" ]
2309.01219#12
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Fact-conflicting Hallucination User Who was the mother of Afonso II, the third king of Portugal? Model The mother of Afonso II was Queen Urraca of Castile. Table 1: Illustrations of the three types of LLM hallucinations that we defined. For input-conflicting hallucination, the LLM makes a mistake in the person's name (Hill→Lucas) during summarizing. For the context-conflicting hallucination, the LLM discusses Silver in the early stage but later switches to Stern, resulting in a contradiction. For the fact-conflicting hallucination, the LLM said the mother of Afonso II was Queen Urraca of Castile, while the correct answer is Dulce Berenguer of Barcelona. Context-conflicting hallucination. LLMs may exhibit self-contradictions when generating lengthy or multi-turn responses. This type of hallucination arises when LLMs lose track of the context or fail to maintain consistency throughout the conversation, potentially due to their limitations in maintaining long-term memory (Liu et al., 2023d) or identifying relevant context (Shi et al., 2023a). The second example in Table 1 demonstrates how a user request to introduce the NBA Commissioner leads to a context-conflicting hallucination. Specifically, the LLM initially introduces Silver (the current NBA commissioner), but later refers to Stern (the former NBA commissioner), demonstrating a lack of consistency in the generation. The focus of recent hallucination research in LLMs is predominantly on fact-conflicting hallucination, despite the importance of the other two types. Possible reasons include but are not limited to: (1) input- and context-conflicting hallucinations have been extensively studied in conventional NLG settings (Ji et al., 2023), whereas fact-conflicting hallucination poses more complex challenges in LLMs due to the absence of an authoritative knowledge source as a reference; (2) fact-conflicting hallucinations tend to have more side effects on the practical applications of LLMs, leading to a greater emphasis in recent studies. In light of this research status, the following sections of our paper will primarily concentrate on fact-conflicting hallucinations, and we will explicitly highlight them when addressing the other two types of hallucinations. Fact-conflicting hallucination.
2309.01219#11
2309.01219#13
2309.01219
[ "2307.03109" ]
2309.01219#13
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
This type of hallucination occurs when LLMs generate infor- mation or text that contradicts established world knowledge. The source of fact-conflicting halluci- nations can be multifarious and introduced at dif- ferent stages of the LLM life cycle, as shown in Figure 2. We present an illustration in Table 1 (third example): in this case, the user asks the LLM about the mother of Afonos II. The LLM gave a wrong answer (Queen Urraca of Castile instead of Dulce Berenguer of Barcelone), which can easily mislead less knowledgeable users. # 2.3 Unique Challenge in the Era of LLM Although the problem of hallucination has been extensively researched in conventional NLG tasks (Ji et al., 2023), hallucinations in LLMs bring forth a unique and complex set of challenges stemming from the training process and usage scenarios.
2309.01219#12
2309.01219#14
2309.01219
[ "2307.03109" ]
2309.01219#14
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Massive training data. Unlike task-specific NLG models trained on limited-scaled datasets, LLMs are pre-trained on trillions of tokens. These 4 Type Turn Example Ambiguity User What is the capital of France? Model The capital of a European country. User How can I replace a flat tire on my car? Incompleteness Model To replace a flat tire, there are four steps. First, youâ ll need to use a jack to lift the car. Second, please remove the lug nuts and flat tire.
2309.01219#13
2309.01219#15
2309.01219
[ "2307.03109" ]
2309.01219#15
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
User Tell me about a typical elementary school teacher. Bias Model A typical elementary school teacher is a woman who is patient, nurtur- ing, and skilled at managing a classroom full of young children. Under-informativeness User Introduce the most popular task in NLP in 2022. Model Sorry, I donâ t know. Maybe you can check external search engines. Table 2: Examples of various problems that LLMs may expose, in addition to hallucinations. pre-training corpora are automatically collected from the web and often contain a significant amount of fabricated, outdated, or biased informa- tion (Penedo et al., 2023). Such inadequate data may lead LLMs to generate hallucinated content. The large data scale may also increase the diffi- culty of applying data-centric approaches to miti- gate the hallucination in LLMs. culty in detecting and reducing input- and context- conflicting hallucination, as we can no longer re- sort to simple superficial patterns. Regarding fact- conflicting hallucinations, we also need to con- sider leveraging more knowledge sources for veri- fication. These factors collectively introduce sub- stantial new challenges. # 2.4 Other Problems in LLMs Versatility of LLMs. Conventional NLG mod- els are typically designed for a single task, and thus, hallucination studies on them are usually task-specific (Maynez et al., 2020; Wang and Sen- nrich, 2020; Xiao and Wang, 2021); however, cur- rent LLMs are expected to excel in multi-task, multi-lingual, and multi-domain settings (Bang et al., 2023; Chang et al., 2023). This expectation poses thorny challenges for both the evaluation In terms and mitigation of LLM hallucinations. of evaluation, LLMs are more commonly used for free-form text generation, and the lack of deter- ministic references in this setting complicates the automatic detection of hallucinations. Therefore, it is crucial to establish a comprehensive, reliable, and automatic evaluation benchmark. Regarding mitigation, the proposed methods should be ro- bustly effective, maintaining decent performance when being applied to various scenarios. Besides hallucination, LLMs also present other problems. We outline some common issues below and present examples in Table 2 to help readers distinguish between them and hallucination. Ambiguity.
2309.01219#14
2309.01219#16
2309.01219
[ "2307.03109" ]
2309.01219#16
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
This type of issue arises when the LLM response is ambiguous, lending itself to mul- tiple interpretations. The response may not neces- sarily be incorrect, but it falls short of providing a useful answer to the user question (Tamkin et al., 2022). The first example in Table 2 exemplifies this issue. The desired answer is â Parisâ , yet the LLM provides an ambiguous response. Incompleteness. The incompleteness issue oc- curs when the generated response is incomplete or fragmented. As demonstrated in the second exam- ple in Table 2, the LLM only informs users of the first two steps in a four-step process for replacing a tire, resulting in an incomplete explanation. Invisibility of errors. Compared to traditional NLG models, LLMs possess a significantly en- hanced writing capability and store a larger vol- ume of knowledge. Consequently, the false in- formation hallucinated by LLMs often appears highly plausible, to the extent that even humans may feel hard to detect. This amplifies the diffi- Bias. Bias in LLMs pertains to the manifestation of unfair or prejudiced attitudes within the gener- ated text. These biases may originate from train- ing data, which frequently encompasses historical texts, literature, social media content, and other sources. Such sources may inherently mirror so-
2309.01219#15
2309.01219#17
2309.01219
[ "2307.03109" ]
2309.01219#17
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
5 Benchmark Evaluation Size Task Format Metrics TruthfulQA FactualityPrompt FActScore KoLA-KC HaluEval FACTOR Gen&Dis Gen Gen Gen Dis Dis 817 16,000 500 190 Question Answering Text Completion Task Instructions Task Instructions 35,000 Question Answering&Task Instructions 4,030 Text Completion Truthfulness Ensemble FActScore Self-contrast Accuracy Accuracy Table 3: Representative benchmarks that can be used for evaluating LLM hallucination including TruthfulQA (Lin et al., 2021), FactualityPrompt (Lee et al., 2022), FActScore (Min et al., 2023), KoLA-KC (Yu et al., 2023a), HaluEval (Li et al., 2023a) and FACTOR (Muhlgay et al., 2023). Note that KoLA (Yu et al., 2023a) is designed for benchmarking world knowledge of LLMs, where the Knowledge Creating (KC) task can be used to assess hallu- cination. These benchmarks all focus on the factuality aspect, but diverge in the following aspects: â Evaluationâ denotes how these benchmarks evaluate hallucination, either by regarding hallucination as a generation quality metric for LLM generations (Generation, referred to as Gen) or assessing whether the LLM can discriminate be- tween factual and non-factual statements (Discrimination, referred to as Dis); â Task Formatâ reflects different methods of prompting language models, e.g., knowledge-intensive question answering (QA), task instructions (TI) and context prefixes for text completion (TC). cietal biases, gender bias, stereotypes, or discrim- inatory beliefs (Navigli et al., 2023). As shown in the third example in Table 2, the LLM portrays the teacher as a woman, which is a gender bias. Under-informativeness. This kind of issue refers to the propensity of LLMs to evade answer- ing certain questions or providing specific infor- mation, even when they should be capable of do- ing so. For instance, due to imperfections in the re- ward model, RLHF may lead to over-optimization of LLMs, potentially leading to a state of under- informativeness (Gao et al., 2022).
2309.01219#16
2309.01219#18
2309.01219
[ "2307.03109" ]
2309.01219#18
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
An example of this is presented in Table 2, where the LLM de- clines to respond to the user query. et al. (2021) and Liu et al. (2022) evaluate mod- elsâ ability to identify context conflicts introduced when BERT (Devlin et al., 2019) performs blank- filling. Most benchmarks today evaluate the fact- conflicting hallucination of LLMs (Lin et al., 2021; Lee et al., 2022; Min et al., 2023; Yu et al., 2023a; Li et al., 2023a; Muhlgay et al., 2023), which refers to their tendency to generate factual errors. This is considered a critical issue in LLMs because it is challenging for users to identify and poses real-life risks. In the upcoming sections, we will review exist- ing benchmark datasets and commonly used eval- uation metrics in §3.1 and §3.2, respectively. # 3 Evaluation of LLM Hallucination # 3.1 Evaluation Benchmarks Previous research has primarily concentrated on evaluating hallucination in specific natural lan- guage generation tasks, such as machine transla- tion (Guerreiro et al., 2023b; Dale et al., 2023), di- alogue generation (Dziri et al., 2021), question an- swering (Durmus et al., 2020) and text summariza- tion (Kryscinski et al., 2020; Maynez et al., 2020; Zhong et al., 2021). These works mainly focus on the input-conflicting hallucination facet, which is relatively easy for human users to identify given the source text, as shown in Table 1. Recently, studying this kind of hallucination in traditional NLG tasks has seen significant advancements. However, evaluating them in the setting of LLMs becomes more challenging due to the free-form and often long-form nature of LLM generation. Regarding context-conflicting hallucination, Cui Various benchmarks have been proposed for eval- uating hallucination in LLMs. We present repre- sentative ones in Table 3 and discuss them based on their evaluation formats, task formats, and con- struction methods below. Evaluation format.
2309.01219#17
2309.01219#19
2309.01219
[ "2307.03109" ]
2309.01219#19
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Existing benchmarks mainly evaluate hallucinations based on two different abilities of LLMs: the ability to generate factual statements or to discriminate them from non-factual ones. We present an example in Table 4 to showcase the difference between the two evaluation formats. Generation benchmarks (Lin et al., 2021; Lee et al., 2022; Min et al., 2023; Yu et al., 2023a) consider hallucination as a gen- eration characteristic, similar to fluency (Napoles et al., 2017) and coherence (Du et al., 2022), and evaluate the generated texts from LLMs. For 6 # Evaluation # Sample # Evaluation Sample # Generation # Generation Question:
2309.01219#18
2309.01219#20
2309.01219
[ "2307.03109" ]
2309.01219#20
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
What happens to you if you eat watermelon seeds? Reference: Nothing happens; You eat watermelon seeds; The watermelon seeds pass through your digestive system; You will not digest the watermelon seeds; The water- melon seeds will be excreted; There is an old wivesâ tale that watermelons will grow in your stomach, but this is impossible. Discrimination Question: What happens to you if you eat watermelon seeds? Choices: (A) The watermelon seeds pass through your digestive system (B) You grow watermelons in your stomach (C) You get sick (D) You have bad dreams Reference: (A) The watermelon seeds pass through your digestive system.
2309.01219#19
2309.01219#21
2309.01219
[ "2307.03109" ]
2309.01219#21
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Table 4: Illustrative examples of two methods for evaluating hallucinations (Generation v.s. Discrimination). instance, TruthfulQA (Lin et al., 2021) evaluates the truthfulness of LLMsâ responses to questions, while FActScore (Min et al., 2023) scrutinizes the factual accuracy of biographies generated by LLMs for specific individuals. In contrast, discrimination benchmarks (Li et al., 2023a; Muhlgay et al., 2023) consider LLMsâ ability to discriminate truthful statements from hallucinated ones. Specifically, HaluEval (Li et al., 2023a) requires the model to determine whether a state- ment contains hallucinated information, while FACTOR (Muhlgay et al., 2023) investigates whether the LLM assigns a higher likelihood to the factual statement compared to non-factual ones. Note that TruthfulQA (Lin et al., 2021) also supports discrimination format by offering a multiple-choice alternative to test a modelâ s ability to identify truthful statements. Task format. Existing benchmarks evaluate LLM hallucinations across various application tasks. Firstly, certain benchmarks (Lin et al., 2021; Li et al., 2023a) explore the issue of hal- lucination in the context of question-answering, evaluating the ability of LLMs to provide truthful answers to knowledge-intensive questions. Sec- ondly, FActScore (Min et al., 2023) and HaluE- val (Li et al., 2023a) employ task instructions, such as biography introduction instructions and 52K instructions from the Alpaca project (Taori et al., 2023), to prompt LLMs to generate re- sponses. The factuality of these responses is then evaluated. Thirdly, a line of work (Lee et al., 2022; Muhlgay et al., 2023) directly prompts LLMs to complete text given a prefix, and diagnoses po- tential hallucination during the generation of in- formative and factual statements. For instance, FACTOR (Muhlgay et al., 2023) considers con- text prefixes in Wikipedia documents, while Fac- tualityPrompt (Lee et al., 2022) designs prefixes specifically for factual or non-factual statements to elicit hallucinations.
2309.01219#20
2309.01219#22
2309.01219
[ "2307.03109" ]
2309.01219#22
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Table 5 provides samples under different task formats. Construction methods. Most aforementioned benchmarks involve human annotators for dataset creation or quality assurance. TruthfulQA (Lin et al., 2021) carefully designs the questions to elicit imitative falsehoods, i.e., false statements with a high likelihood on the training distribu- tion. They then hire human annotators to fur- ther validate the agreement of golden answers. FActScore (Min et al., 2023) conducts a man- ual annotation pipeline to transform a long-form model generation into pieces of atomic statements. HaluEval (Li et al., 2023a) employs two construc- tion methods. For the automatic generation track, they design prompts to query ChatGPT to sam- ple diverse hallucinations and automatically fil- ter high-quality ones. For the human-annotation track, they hire human annotators to annotate the existence of hallucination in the model responses and list the corresponding spans. FACTOR (Muhl- gay et al., 2023) first uses external LLMs to gen- erate non-factual completion. Then, they man- ually validate whether the automatically created datasets meet the predefined requirements, i.e., they should be non-factual, fluent, and similar to the factual completion. To construct knowledge creation task, Yu et al. (2023a) build an annota-
2309.01219#21
2309.01219#23
2309.01219
[ "2307.03109" ]
2309.01219#23
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
7 Task Format Sample Question Answering Question: The DutchBelgian television series that â House of Anubiâ was based on first aired in what year? Answer: 2006 Task Instruction Instruction: Give me 3 useful websites for C programming. Response: 1. GeeksforGeeks: This website provides tutorials and practice problems on C pro- gramming. 2. Programiz: This website offers tutorials, practice problems, and quizzes on C pro- gramming. 3. Codeacademy: This website provides free interactive tutorials on C programming. Text Completion Context: â Sorryâ
2309.01219#22
2309.01219#24
2309.01219
[ "2307.03109" ]
2309.01219#24
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
is a song by American singer Madonna from her tenth studio album Confessions on a Dance Floor (2005). It was written and produced by Madonna and Stuart Price, and released as the second single from the album on February 7, 2006. It later appeared on Celebration, her 2009 greatest hits album. An uptempo dance song, â Sorryâ was one of the first tracks developed for the album and had numerous remix treatments before the ultimate version of the track was finalized. Completion: One of the remixes was done by the known band the Pet Shop Boys, featuring added lyrics by the band Table 5: Illustrative examples for the task format where existing benchmarks evaluate hallucinations. tion platform to facilitate fine-grained event anno- tations. # 3.2 Evaluation Metrics The free-form and open-ended nature of language generation makes it difficult to evaluate the hal- lucinations produced by LLMs. The most com- monly used and reliable methods for evaluating hallucinations rely on human experts following specific principles (Lin et al., 2021; Lee et al., 2022; Min et al., 2023; Li et al., 2023a). It is worth noting that although existing benchmarks use hu- man evaluation to ensure reliability, they also seek to support automatic methods to facilitate effi- cient and consistent evaluation. Human evaluation. To ensure precise and re- liable evaluation, existing benchmarks focus on designing dedicated human evaluation principles that involve manual annotation for evaluating each model-generated text. TruthfulQA (Lin et al., 2021) proposes a human-annotation guideline, which instructs annotators to assign one of thir- teen qualitative labels to the model output and ver- ify answers by consulting a reliable source. Lee et al. (2022) conduct human annotation to ver- ify the validity of the proposed automatic evalua- tion metrics. FactScore (Min et al., 2023) requires annotators to assign three labels to each atomic
2309.01219#23
2309.01219#25
2309.01219
[ "2307.03109" ]
2309.01219#25
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
fact: "Supported" or "Not-supported" for facts that are supported or unsupported by the knowledge source, and "Irrelevant" for statements that are not related to the prompt. While human evaluation of- fers reliability and interpretability, it may be in- consistent due to subjectivity across annotators. It is also prohibitively expensive due to the labor- intensive annotation processes required each time a new model needs to be evaluated. Model-based automatic evaluation. Several studies (Lin et al., 2021; Min et al., 2023; Zha et al., 2023; Mündler et al., 2023) have devised model-based methods as a proxy for human eval- uation. Specifically, TruthfulQA (Lin et al., 2021) trains a GPT-3-6.7B model to classify answers (as true or false) to questions based on their col- lected human annotations. They observe that the fine-tuned GPT-judge model achieves a validation accuracy of 90-96% and effectively generalizes to new answer formats. AlignScore (Zha et al., 2023) establishes a unified function to evaluate the factual consistency between two texts. This alignment function is trained on a large dataset spanning seven tasks, including Natural Language Inference (NLI), Question Answering (QA), and paraphrasing. Differently, Min et al. (2023) and Mündler et al. (2023) harness the capabilities of off-the-shelf models to serve as automatic evalu-
2309.01219#24
2309.01219#26
2309.01219
[ "2307.03109" ]
2309.01219#26
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
8 ators. In particular, FactScore (Min et al., 2023) begins by employing a passage retriever, such as Generalizable T5-based Retrievers (Ni et al., 2022), to gather pertinent information. Subse- quently, an evaluation model, such as LLaMA- 65B (Touvron et al., 2023a), uses the retrieved knowledge to determine the truthfulness of a state- ment. They further adopt micro F1 scores and er- ror rates to assess the reliability of the automatic metrics in comparison with human evaluation. Mündler et al. (2023) design dedicated prompts to query an evaluator LLM (e.g., ChatGPT (OpenAI, 2023a)) whether the subjective LLM contradicts itself under the same context, and report classifi- cation metrics, including precision, recall, and F1 score. Rule-based automatic evaluation. For discrim- ination benchmarks (Li et al., 2023a; Muhlgay et al., 2023), common rule-based classification metrics such as accuracy can be directly applied to evaluating the ability of LLMs to discriminate factual statements from non-factual ones. Bang et al. (2023) also compute accuracy to reflect the modelâ s ability to identify misinformation on sci- entific and social claims related to COVID-19. In contrast, another line of research (Lee et al., 2022; Yu et al., 2023a) focuses on devising heuristic methods specifically designed for assessing hal- lucination. FactualityPrompt (Lee et al., 2022) combines named-entity-based metric and textual entailment-based metric to capture different as- pects of factuality. To evaluate knowledge cre- ation, Yu et al. (2023a) devise a self-contrast met- ric to quantify model consistency in generating factual statements. They accomplish this by com- paring model-generated texts with and without in- cluding golden knowledge as part of the prompts based on Rouge-L (F1) (Lin, 2004).
2309.01219#25
2309.01219#27
2309.01219
[ "2307.03109" ]
2309.01219#27
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
# 4 Sources of LLM Hallucination In this section, we aim to explore the various fac- tors that can induce hallucinations within LLMs. We identify four primary sources that span differ- ent stages of the LLM life cycle. LLMs lack relevant knowledge or internalize false knowledge. During the pre-training phase, LLMs amass a vast amount of knowledge from an enormous volume of training data, which is then stored within their model parameters. When asked to answer questions or complete tasks, LLMs of-
2309.01219#26
2309.01219#28
2309.01219
[ "2307.03109" ]
2309.01219#28
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
9 ten exhibit hallucinations if they lack pertinent knowledge or have internalized false knowledge from the training corpora. Li et al. (2022c) discover that LLMs sometimes misinterpret spurious correlations, such as posi- tionally close or highly co-occurring associations, as factual knowledge. Specifically, McKenna et al. (2023) investigate the hallucination prob- lem within the context of the natural language inference (NLI) task and find a strong correla- tion between LLM hallucination and the distri- bution of the training data. For example, they observe that LLMs are biased toward affirm- ing test samples where the hypotheses are at- tested in the training data. Besides, Dziri et al. (2022) argue that hallucination is also present in human-generated corpora (can be reflected as out- dated (Liska et al., 2022; Luu et al., 2022), bi- ased (Chang et al., 2019; Garrido-Muñoz et al., 2021), or fabricated (Penedo et al., 2023) expres- sion). As a result, LLMs are prone to replicate or even amplify this hallucination behavior. Wu et al. (2023b) reveal that the memorizing and reason- ing performance of PLMs for ontological knowl- edge is less than perfect. Sun et al. (2023a) put forward a benchmark named Head-to-Tail to eval- uate the factual knowledge of LLMs for entities with different levels of popularity. Experimental results suggest that LLMs still perform unsatisfac- torily on torso and tail facts. Furthermore, Zheng et al. (2023c) identified two additional abilities as- sociated with knowledge memorization that en- able LLMs to provide truthful answers: knowledge recall and knowledge reasoning. Deficiencies in either of these abilities can lead to hallucinations. LLMs sometimes overestimate their capacities. Some studies have been conducted with the aim of understanding whether language models can assess the accuracy of their responses and rec- ognize their knowledge boundaries. Kadavath et al. (2022) conduct experiments that demon- strate LLMsâ
2309.01219#27
2309.01219#29
2309.01219
[ "2307.03109" ]
2309.01219#29
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
ability to evaluate the correctness of their own responses (self-evaluation) and de- termine whether they know the answer to a given the question. However, for very large LLMs, distribution entropy of correct and incorrect an- swers could be similar, suggesting that LLMs are equally confident when generating incorrect an- swers as they are generating correct ones. Yin et al. (2023) also evaluate the capacity of pop- ular LLMs to identify unanswerable or unknow- able questions. Their empirical study reveals that even the most advanced LLM, GPT4 (OpenAI, 2023b), shows a significant performance gap when compared to humans. Ren et al. (2023) note a correlation between accuracy and confidence, but such confidence often surpasses the actual capa- bilities of LLMs, namely over-confidence. In gen- eral, LLMsâ understanding of factual knowledge boundaries may be imprecise, and they frequently exhibit over-confidence. Such over-confidence misleads LLMs to fabricate answers with unwar- ranted certainty. Problematic alignment process could mislead LLMs into hallucination. LLMs typically un- dergo an alignment process following pre-training, where they receive further training on curated instruction-following examples to align their re- sponses with human preferences. However, when trained on instructions for which LLMs have not acquired prerequisite knowledge from the pre- training phase, this is actually a misalignment pro- cess that encourages LLMs to hallucinate (Gold- berg, 2023; Schulman, 2023). Another potential issue is sycophancy, where LLMs may generate responses that favor the userâ s perspective rather than providing correct or truthful answers, which can result in hallucination (Perez et al., 2022; Rad- hakrishnan et al., 2023; Wei et al., 2023b). The generation strategy employed by LLMs has potential risks.
2309.01219#28
2309.01219#30
2309.01219
[ "2307.03109" ]
2309.01219#30
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Todayâ s most advanced LLMs generate responses sequentially, outputting one token at a time. Zhang et al. (2023a) discover that LLMs sometimes over-commit to their early mistakes, even when they recognize they are in- correct. In other words, LLMs may prefer snow- balling hallucination for self-consistency rather than recovering from errors. This phenomenon is known as hallucination snowballing. Azaria and Mitchell (2023) also contend that local opti- mization (token prediction) does not necessarily ensure global optimization (sequence prediction), and early local predictions may lead LLMs into situations where it becomes challenging to formu- late a correct response. Lee et al. (2022) highlight that the randomness introduced by sampling-based generation strategies, such as top-p and top-k, can also be a potential source of hallucination.
2309.01219#29
2309.01219#31
2309.01219
[ "2307.03109" ]
2309.01219#31
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
| LLM | Pre-train Data Size |
|---|---|
| GLM (Zeng et al., 2022) | 400B tokens |
| BLOOM (Scao et al., 2022) | 366B tokens |
| GPT-3 (Brown et al., 2020) | 300B tokens |
| LLaMA (Touvron et al., 2023a) | 1.4T tokens |
| Llama 2 (Touvron et al., 2023b) | 2T tokens |

Table 6: The pre-training data size of popular LLMs.

# 5 Mitigation of LLM Hallucination

In this section, we provide an extensive review of recent studies focused on mitigating LLM hallucinations. To make the structure clear, we categorize existing mitigation works based on the timing of their application within the LLM life cycle.

# 5.1 Mitigation during Pre-training

Existing work (Zhou et al., 2023a) argues that the knowledge of LLMs is mostly acquired during the pre-training phase. The presence of noisy data such as misinformation in the pre-training corpus could corrupt the parametric knowledge of LLMs, which is a significant factor contributing to hallucinations, as previously discussed in §4. Akyürek et al. (2022) also demonstrate that it is possible to trace the factual knowledge acquired by language models back to their training data. Consequently, an intuitive approach to mitigating hallucinations could involve manually or automatically curating the pre-training corpus to minimize unverifiable or unreliable data as much as possible. Before the LLM era, there existed a series of efforts dedicated to manually eliminating noisy training data to mitigate hallucinations. For instance, Gardent et al. (2017) focus on the data-to-text task and enlist human annotators to manually compose clean and accurate responses based on given knowledge bases; such curated training data has been shown to effectively reduce hallucinations. Similarly, Wang (2019) manually refines the text in existing table-to-text datasets and observes that this process also substantially alleviates fact hallucinations. Besides, Parikh et al. (2020) instruct annotators to revise verified sentences from Wikipedia rather than directly creating new sentences when constructing table-to-text training data. This approach has also been proven to result in improved factuality of results. With the advent of the LLM era, curating training data during pre-training has become increasingly challenging due to the vast scale of pre-training corpora (as exemplified in Table 6). For
2309.01219#30
2309.01219#32
2309.01219
[ "2307.03109" ]
2309.01219#32
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
With the advent of the LLM era, curating train- ing data during pre-training has become increas- ingly challenging due to the vast scale of pre- training corpora (as exemplified in Table 6). For SFT Dataset Data Size Alpaca (Taori et al., 2023) GPT4-Alpaca (Peng et al., 2023b) Baize (Xu et al., 2023) Dolly (Conover et al., 2023) Open-assistant (Köpf et al., 2023) LIMA (Zhou et al., 2023a) 52k samples 52k samples 210k samples 15k samples 34k samples 1k samples Table 7: The size of popular SFT datasets. instance, Llama 2 (Touvron et al., 2023b) conducts pre-training on about two trillion tokens. There- fore, compared to manual curation, a more practi- cal approach today could be automatically select- ing reliable data or filtering out noisy data. For example, the pre-training data of GPT-3 (Brown et al., 2020) is cleaned by using similarity to a range of high-quality reference corpora. The de- velopers of Falcon (Penedo et al., 2023) carefully extract high-quality data from the web via heuris- tic rules and prove that properly curated pertaining corpora lead to powerful LLMs. Li et al. (2023f) propose phi-1.5, a 1.3 billion parameter LLMs pre-trained on filtered â
2309.01219#31
2309.01219#33
2309.01219
[ "2307.03109" ]
2309.01219#33
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
textbook-likeâ synthetic data, which exhibits many traits of much larger LLMs. In order to mitigate hallucinations, current LLMs tend to collect pre-training data from credi- ble text sources. The developers of Llama 2 (Tou- vron et al., 2023b) strategically up-sample data from highly factual sources, such as Wikipedia, when constructing the pre-training corpus. Lee et al. (2022) propose to prepend the topic pre- fix to sentences in the factual documents to make each sentence serve as a standalone fact during pre-training. Concretely, they treat the document name as the topic prefix and observe this method improves LMsâ performance on TruthfulQA. Summary & Discussion. The mitigation of hal- lucinations during pre-training is primarily cen- tred around the curation of pre-training corpora. Given the vast scale of existing pre-training cor- pora, current studies predominantly employ sim- ple heuristic rules for data selection and filtering. A potential avenue for exploration could be devis- ing more effective selection or filtering strategies.
2309.01219#32
2309.01219#34
2309.01219
[ "2307.03109" ]
2309.01219#34
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
# 5.2 Mitigation during SFT As a common practice, current LLMs collec- tively undergo the process known as supervised fine-tuning (SFT) to elicit their knowledge ac- quired from pre-training and learn how to inter- act with users (Wang et al., 2023c; Zhang et al., 11 Teach LLMs to hallucinate =) Parametric Knowledge icles SFT Data Figure 3: The SFT data usually contains samples that exceed LLMsâ parametric knowledge, which may re- sult in hallucinations. 2023b). SFT generally involves first annotating or collecting massive-task instruction-following data (Chung et al., 2022; Taori et al., 2023), followed by fine-tuning pre-trained foundational LLMs on this data using maximum likelihood es- timation (MLE) (Wei et al., 2021). By employing well-designed SFT strategies, many recent stud- ies claim to have built LLMs that achieve perfor- mance on par with ChatGPT (Wang et al., 2023b). Similar to pre-training, one potential approach to reduce hallucination during the SFT stage could be curating the training data. Given the rela- tively small volume of SFT data (refer to Table 7), both manual and automatic curation are viable options here. Zhou et al. (2023a) have meticu- lously constructed an instruction-tuning dataset, comprising 1,000 samples annotated by human ex- perts. Some other studies (Chen et al., 2023b; Cao et al., 2023; Lee et al., 2023) have employed an automatic selection of high-quality instruction- tuning data, by leveraging LLMs as evaluators or designing specific rules. Experimental results on hallucination-related benchmarks, such as Truth- fulQA (Lin et al., 2021), suggest that LLMs fine- tuned on such curated instruction data demonstrate higher levels of truthfulness and factuality com- pared to LLMs fine-tuned on uncurated data. Fur- thermore, Mohamed et al. (2023) propose the inte- gration of domain-specific knowledge sets into the SFT data, which aims to reduce hallucinations that arise from a lack of relevant knowledge.
2309.01219#33
2309.01219#35
2309.01219
[ "2307.03109" ]
2309.01219#35
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
It is worth noting that Schulman (2023) underscored a potential risk of the SFT process: it could induce hallucination from LLMs due to behavior cloning. Behavior cloning is a concept in reinforcement learning (Torabi et al., 2018), in which the model learns directly by imitating the expert's actions. The problem is that this method simply mimics behavior without learning a strategy to achieve the final goal. The SFT process of LLMs can be viewed as a special case of behavior cloning, where LLMs learn the format and style of interaction by mimicking humans. Despite having encoded a substantial amount of knowledge into their parameters, there remains knowledge that surpasses their capacity (Yin et al., 2023; Ren et al., 2023). By cloning human behaviors during SFT, LLMs learn to respond to all questions with a predominantly positive tone, without assessing whether these questions exceed their knowledge boundaries (see Figure 3). As a result, during inference, if prompted to answer questions related to unlearned knowledge, they are likely to confidently produce hallucinations. One way to mitigate this problem can be honesty-oriented SFT, which means introducing some honest samples into the SFT data. The honest samples refer to responses that admit incompetence, such as "Sorry, I don't know".
2309.01219#34
2309.01219#36
2309.01219
[ "2307.03109" ]