Dataset columns: id (string, 12–15 chars); title (string, 8–162 chars); content (string, 1–17.6k chars); prechunk_id (string, 0–15 chars); postchunk_id (string, 0–15 chars); arxiv_id (string, 10 chars); references (list, length 1)
2309.05922#6
A Survey of Hallucination in Large Foundation Models
However, (Zhang et al., 2023a) proposed a hypothesis that, in certain instances, when language models attempt to rationalize previously generated hallucinations, they may produce false statements that they can independently identify as inaccurate. Thus, they created three question-answering datasets where ChatGPT and GPT-4 frequently provide incorrect answers and accompany them with explanations that contain at least one false assertion. HaluEval (Li et al., 2023b) is a comprehensive benchmark designed for evaluating hallucination in LLMs. It serves as a tool to systematically assess LLMs' performance in terms of hallucination across various domains and languages, helping researchers and developers gauge and improve the reliability of these models.

Figure 1: Taxonomy for Hallucination in Large Foundation Models.
- Text: LLMs (Li et al., 2023b; Mündler et al., 2023; Zhang et al., 2023b; Peng et al., 2023; Li et al., 2023d; Elaraby et al., 2023; Jha et al., 2023; McKenna et al., 2023; Varshney et al., 2023; Huang and Chang, 2023; Luo et al., 2023; Gao et al., 2023); Multilingual LLMs (Pfeiffer et al., 2023; Cui et al., 2023); Domain-specific LLMs (Medical: Umapathi et al., 2023; Law: Cui et al., 2023)
- Image: Li et al. (2023e); Gunjal et al. (2023); Wu et al. (2023)
- Video: Himakunthala et al. (2023); Kulal et al. (2023); Li et al. (2023c); Yu et al. (2023); Liu and Wan (2023)
- Audio: Doh et al. (2023); Li et al. (2023a)
2309.05922#5
2309.05922#7
2309.05922
[ "2307.12168" ]
2309.05922#7
A Survey of Hallucination in Large Foundation Models
Figure 2: The evolution of "hallucination" papers for Large Foundation Models (LFMs) from March 2023 to September 2023, broken down by modality (Text, Image, Video, Audio).

User: I am writing a paper about the drug Skytrofa.

GPT-3.5: That sounds interesting! Skytrofa (molgradex) is a medication developed for the treatment of autoimmune pulmonary alveolar proteinosis (aPAP). It's a rare lung disorder where a type of protein called surfactant accumulates in the air sacs of the lungs, leading to breathing difficulties.
2309.05922#6
2309.05922#8
2309.05922
[ "2307.12168" ]
2309.05922#8
A Survey of Hallucination in Large Foundation Models
Figure 3: An illustration of hallucination (Luo et al., 2023). Incorrect information is highlighted in red.

Hallucination mitigation using external knowledge: Using interactive question-knowledge alignment, (Zhang et al., 2023b) presents a method for mitigating language model hallucination. Their proposed approach focuses on aligning generated text with relevant factual knowledge, enabling users to interactively guide the model's responses to produce more accurate and reliable information. This technique aims to improve the quality and factuality of language model outputs by involving users in the alignment process. LLM-AUGMENTER (Peng et al., 2023) improves LLMs using external knowledge and automated feedback. It highlights the need to address the limitations and potential factual errors in LLM-generated content. This method involves incorporating external knowledge sources and automated feedback mechanisms to enhance the accuracy and reliability of LLM outputs. By doing so, the paper aims to mitigate factual inaccuracies and improve the overall quality of LLM-generated text. Similarly, (Li et al., 2023d) introduces a framework called "Chain of Knowledge" for grounding LLMs with structured knowledge bases. Grounding refers to the process of connecting LLM-generated text with structured knowledge to improve factual accuracy and reliability. The framework utilizes a hierarchical approach, chaining multiple knowledge sources together to provide context and enhance the understanding of LLMs. This approach aims to improve the alignment of LLM-generated content with structured knowledge, reducing the risk of generating inaccurate or hallucinated information.

Smaller, open-source LLMs with fewer parameters often experience significant hallucination issues compared to their larger counterparts (Elaraby et al., 2023). This work focuses on evaluating and mitigating hallucinations in BLOOM 7B, which represents weaker open-source LLMs used in research and commercial applications. They introduce HALOCHECK, a lightweight knowledge-free framework designed to assess the extent of hallucinations in LLMs. Additionally, it explores methods like knowledge injection and teacher-student approaches to reduce hallucination problems in low-parameter LLMs.
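The external-knowledge mitigation loop described above (retrieve evidence, generate a grounded answer, revise when automated feedback flags unsupported claims) can be summarized in a short sketch. This is a minimal illustration, not the LLM-AUGMENTER implementation: `retrieve_evidence`, `generate_answer`, and `verify_against_evidence` are hypothetical stand-ins for a retriever, an LLM call, and a fact-checking scorer.

```python
# Minimal sketch of mitigation with external knowledge and automated feedback.
# retrieve_evidence, generate_answer, and verify_against_evidence are hypothetical
# stand-ins; they are not part of any specific library or of LLM-AUGMENTER itself.

def mitigate_with_external_knowledge(question, retrieve_evidence, generate_answer,
                                     verify_against_evidence, max_rounds=3):
    evidence = retrieve_evidence(question)           # e.g. passages from a knowledge base
    feedback = ""
    answer = ""
    for _ in range(max_rounds):
        answer = generate_answer(question, evidence, feedback)      # LLM call grounded in evidence
        score, issues = verify_against_evidence(answer, evidence)   # automated feedback
        if score >= 0.9:                              # answer judged consistent with the evidence
            return answer
        # feed the detected problems back into the next generation round
        feedback = "Revise the answer; unsupported claims: " + "; ".join(issues)
    return answer                                     # best effort after max_rounds
```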
2309.05922#7
2309.05922#9
2309.05922
[ "2307.12168" ]
2309.05922#9
A Survey of Hallucination in Large Foundation Models
Moreover, the risks associated with LLMs can be mitigated by drawing parallels with web systems (Huang and Chang, 2023). This work highlights the absence of a critical element, "citation," in LLMs, which could improve content transparency and verifiability, and address intellectual property and ethical concerns.

Hallucination mitigation using prompting techniques: "Dehallucinating" refers to reducing the generation of inaccurate or hallucinated information by LLMs. Dehallucinating LLMs using formal methods guided by iterative prompting is presented in (Jha et al., 2023). They employ formal methods to guide the generation process through iterative prompts, aiming to improve the accuracy and reliability of LLM outputs. This method is designed to mitigate the issues of hallucination and enhance the trustworthiness of LLM-generated content.

# 2.2 Multilingual LLMs

Large-scale multilingual machine translation systems have shown impressive capabilities in directly translating between numerous languages, making them attractive for real-world applications. However, these models can generate hallucinated translations, which pose trust and safety issues when deployed. Existing research on hallucinations has mainly focused on small bilingual models for high-resource languages, leaving a gap in understanding hallucinations in massively multilingual models across diverse translation scenarios. To address this gap, (Pfeiffer et al., 2023) conducted a comprehensive analysis of both the M2M family of conventional neural machine translation models and ChatGPT, a versatile LLM that can be prompted for translation. The investigation covers a wide range of conditions, including over 100 translation directions, various resource levels, and languages beyond English-centric pairs.

# 2.3 Domain-specific LLMs

Hallucinations in mission-critical areas such as medicine, banking, finance, law, and clinical settings refer to instances where false or inaccurate information is generated or perceived, potentially leading to serious consequences. In these sectors, reliability and accuracy are paramount, and any form of hallucination, whether in data, analysis, or decision-making, can have significant and detrimental effects on outcomes and operations. Consequently, robust measures and systems are essential to minimize and prevent hallucinations in these high-stakes domains.
2309.05922#8
2309.05922#10
2309.05922
[ "2307.12168" ]
2309.05922#10
A Survey of Hallucination in Large Foundation Models
Medicine: Hallucinations in LLMs are especially concerning in the medical field, where generating plausible yet inaccurate information can be detrimental. To tackle this problem, (Umapathi et al., 2023) introduces a new benchmark and dataset called Med-HALT (Medical Domain Hallucination Test). It is specifically designed to evaluate and mitigate hallucinations in LLMs. It comprises a diverse multinational dataset sourced from medical examinations across different countries and includes innovative testing methods. Med-HALT consists of two categories of tests: reasoning and memory-based hallucination tests, aimed at assessing LLMs' problem-solving and information retrieval capabilities in medical contexts.

Law: ChatLaw (Cui et al., 2023) is an open-source LLM specialized for the legal domain. To ensure high-quality data, the authors created a meticulously designed legal domain fine-tuning dataset. To address the issue of model hallucinations during legal data screening, they propose a method that combines vector database retrieval with keyword retrieval. This approach effectively reduces inaccuracies that may arise when solely relying on vector database retrieval for reference data retrieval in legal contexts.

# 3 Hallucination in Large Image Models

Contrastive learning models employing a Siamese structure (Wu et al., 2023) have displayed impressive performance in self-supervised learning. Their success hinges on two crucial conditions: the presence of a sufficient number of positive pairs and the existence of ample variations among them. Without meeting these conditions, these frameworks may lack meaningful semantic distinctions and become susceptible to overfitting. To tackle these
2309.05922#9
2309.05922#11
2309.05922
[ "2307.12168" ]
2309.05922#11
A Survey of Hallucination in Large Foundation Models
challenges, the authors introduce the Hallucinator, which efficiently generates additional positive samples to enhance contrast. The Hallucinator is differentiable, operating in the feature space, making it amenable to direct optimization within the pre-training task and incurring minimal computational overhead.

Figure 4: Instances of object hallucination within LVLMs (Li et al., 2023e). Ground-truth objects in annotations are indicated in bold, while red objects represent hallucinated objects by LVLMs. The left case occurs in the conventional instruction-based evaluation approach, while the right cases occur in three variations of POPE (random, popular, and adversarial settings).

Efforts to enhance LVLMs for complex multimodal tasks, inspired by LLMs, face a significant challenge: object hallucination, where LVLMs generate inconsistent objects in descriptions. This study (Li et al., 2023e) systematically investigates object hallucination in LVLMs and finds it is a common issue.
2309.05922#10
2309.05922#12
2309.05922
[ "2307.12168" ]
2309.05922#12
A Survey of Hallucination in Large Foundation Models
Visual instructions, especially frequently occurring or co-occurring objects, influence this problem. Existing evaluation methods are also affected by input instructions and LVLM generation styles. To address this, the study introduces an improved evaluation method called POPE, providing a more stable and flexible assessment of object hallucination in LVLMs.

Instruction-tuned Large Vision Language Models (LVLMs) have made significant progress in handling various multimodal tasks, including Visual Question Answering (VQA). However, generating detailed and visually accurate responses remains a challenge for these models. Even state-of-the-art LVLMs like InstructBLIP exhibit a high rate of hallucinatory text, comprising 30 percent of non-existent objects, inaccurate descriptions, and erroneous relationships. To tackle this issue, the study (Gunjal et al., 2023) introduces M-HalDetect, a Multimodal Hallucination Detection Dataset designed for training and evaluating models aimed at detecting and preventing hallucinations. M-HalDetect contains 16,000 finely detailed annotations on VQA examples, making it the first comprehensive dataset for detecting hallucinations in detailed image descriptions.

# 4 Hallucination in Large Video Models

Hallucinations can occur when the model makes incorrect or imaginative assumptions about the video frames, leading to the creation of artificial or erroneous visual information (Fig. 5).
2309.05922#11
2309.05922#13
2309.05922
[ "2307.12168" ]
2309.05922#13
A Survey of Hallucination in Large Foundation Models
Video content. Caption 1: A woman is throwing darts at a board. She throws them at a board. She jumps off into the distance and smiles. Caption 2: A man is seen standing in a room and leads into a man speaking to the camera. The man is throwing darts at a dart board. The man then throws the dart board and then goes back to the camera. Caption 3: A man in a white shirt is standing at a dart board. He throws a dart at the end.

Figure 5: A video featuring three captions generated by various captioning models (Liu and Wan, 2023), with factual errors highlighted in red italics.

The challenge of understanding scene affordances is tackled by introducing a method for inserting people into scenes in a lifelike manner (Kulal et al., 2023). Using an image of a scene with a marked area and an image of a person, the model seamlessly integrates the person into the scene while considering the scene's characteristics. The model is capable of deducing realistic poses based on the scene context, adjusting the person's pose accordingly, and ensuring a visually pleasing composition. The self-supervised training enables the model to generate a variety of plausible poses while respecting the scene's context. Additionally, the model can also generate lifelike people and scenes on its own, allowing for interactive editing.

VideoChat (Li et al., 2023c) is a comprehensive system for understanding videos with a chat-oriented approach. VideoChat combines foundational video models with LLMs using an adaptable neural interface, showcasing exceptional abilities in understanding space, time, event localization, and inferring cause-and-effect relationships. To fine-tune this system effectively, they introduced a dataset specifically designed for video-based instruction, comprising thousands of videos paired with detailed descriptions and conversations. This dataset places emphasis on skills like spatiotemporal reasoning and causal relationships, making it a valuable resource for training chat-oriented video understanding systems.

Recent advances in video inpainting have been notable (Yu et al., 2023), particularly in cases where explicit guidance like optical flow can help propagate missing pixels across frames. However, challenges arise when cross-frame information is lacking, leading to shortcomings. So, instead of borrowing pixels from other frames, the model focuses on addressing the reverse problem.
2309.05922#12
2309.05922#14
2309.05922
[ "2307.12168" ]
2309.05922#14
A Survey of Hallucination in Large Foundation Models
This work introduces a dual-modality-compatible inpainting framework called Deficiency-aware Masked Transformer (DMT). Pretraining an image inpainting model to serve as a prior for training the video model has an advantage in improving the handling of situations where information is deficient.

Video captioning aims to describe video events using natural language, but it often introduces factual errors that degrade text quality. While factuality consistency has been studied extensively in text-to-text tasks, it has received less attention in vision-based text generation. In this research (Liu and Wan, 2023), the authors conducted a thorough human evaluation of factuality in video captioning, revealing that 57.0% of model-generated sentences contain factual errors. Existing evaluation metrics, mainly based on n-gram matching, do not align well with human assessments. To address this issue, they introduced a model-based factuality metric called FactVC, which outperforms previous metrics in assessing factuality in video captioning.

# 5 Hallucination in Large Audio Models

Automatic music captioning, which generates text descriptions for music tracks, has the potential to enhance the organization of vast musical data. However, researchers encounter challenges due to the limited size and expensive collection process of existing music-language datasets. To address this scarcity, (Doh et al., 2023) used LLMs to generate descriptions from extensive tag datasets. They created a dataset known as LP-MusicCaps, comprising around 2.2 million captions paired with 0.5 million audio clips. They also conducted a comprehensive evaluation of this large-scale music captioning dataset using various quantitative natural language processing metrics and human assessment. They trained a transformer-based music captioning model on this dataset and evaluated its performance in zero-shot and transfer-learning scenarios.

Ideally, the video should enhance the audio, and in (Li et al., 2023a), the authors used an advanced language model for data augmentation without human labeling. Additionally, they utilized an audio encoding model to efficiently adapt a pre-trained text-to-image generation model for text-to-audio generation.
2309.05922#13
2309.05922#15
2309.05922
[ "2307.12168" ]
2309.05922#15
A Survey of Hallucination in Large Foundation Models
# 6 Hallucination is not always harmful: A different perspective

Suggesting an alternative viewpoint, (Wiggers, 2023) discusses how hallucinating models could serve as "collaborative creative partners," offering outputs that may not be entirely grounded in fact but still provide valuable threads to explore. Leveraging hallucination creatively can lead to results or novel combinations of ideas that might not readily occur to most individuals.

"Hallucinations" become problematic when the statements generated are factually inaccurate or contravene universal human, societal, or particular cultural norms. This is especially critical in situations where an individual relies on the LLM to provide expert knowledge. However, in the context of creative or artistic endeavors, the capacity to generate unforeseen outcomes can be quite advantageous. Unexpected responses to queries can surprise humans and stimulate the discovery of novel idea connections.
2309.05922#14
2309.05922#16
2309.05922
[ "2307.12168" ]
2309.05922#16
A Survey of Hallucination in Large Foundation Models
Table 1 (Text modality), Title column:
- SELFCHECKGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models (Manakul et al., 2023)
- HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models (Li et al., 2023b)
- Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation (Mündler et al., 2023)
- PURR: Efficiently Editing Language Model Hallucinations by Denoising Language Model Corruptions (Chen et al., 2023)
- Mitigating Language Model Hallucination with Interactive Question-Knowledge Alignment (Zhang et al., 2023b)
- How Language Model Hallucinations Can Snowball (Zhang et al., 2023a)
- Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback (Peng et al., 2023)
- ChatLawLLM (Cui et al., 2023)
- The Internal State of an LLM Knows When its Lying (Azaria and Mitchell, 2023)
- Chain of Knowledge: A Framework for Grounding Large Language Models with Structured Knowledge Bases (Li et al., 2023d)
- HALO: Estimation and Reduction of Hallucinations in Open-Source Weak Large Language Models (Elaraby et al., 2023)
- A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation (Varshney et al., 2023)
2309.05922#15
2309.05922#17
2309.05922
[ "2307.12168" ]
2309.05922#17
A Survey of Hallucination in Large Foundation Models
Table 1 (Text modality), Title column (continued):
- Dehallucinating Large Language Models Using Formal Methods Guided Iterative Prompting (Jha et al., 2023)

Task(s) column: QA; QA, Dialogue, Summarization, General; Text generation; Editing for Attribution; Question-knowledge alignment; QA; Task-oriented dialog and open-domain question answering; QA; Classification; Knowledge-intensive tasks; Consistency, Factuality, BS, NLI; QA, Article generation; Dialog.

Dataset column: Manual (WikiBio); HaluEval; Manual; Multiple question answering and dialog datasets; FuzzyQA; Manual; News Chat, Customer Service; Manual; Manual; FEVER, AdvHotpotQA; Manual on NBA domain; WikiBio; -.

Evaluation Metric column: Token probability or entropy; Automatic; F1 score; Attribution, Preservation; Attributable to Identified Sources (Castaldo and Yang, 2007); Accuracy; Knowledge F1 (KF1) and BLEU-4; ELO model ranking; Accuracy; Accuracy; Pearson and Kendall tau correlation coefficients; Percentage of mitigated hallucinations; -.

- Med-HALT: Medical Domain Hallucination Test for Large Language Models (Umapathi et al., 2023). Task(s): Reasoning Hallucination Test (RHT), Memory Hallucination Test (MHT); Dataset: Med-HALT.
2309.05922#16
2309.05922#18
2309.05922
[ "2307.12168" ]
2309.05922#18
A Survey of Hallucination in Large Foundation Models
Table 1 (Text modality), continued:
- Med-HALT (Umapathi et al., 2023). Evaluation Metric: Accuracy, Pointwise score.
- Sources of Hallucination by Large Language Models on Inference Tasks (McKenna et al., 2023). Task(s): Textual entailment; Dataset: Altered directional inference dataset; Evaluation Metric: Entailment probability.
- Hallucinations in Large Multilingual Translation Models (Pfeiffer et al., 2023). Task(s): MT; Dataset: FLORES-101, WMT, and TICO; Evaluation Metric: spBLEU.

Table 1 continued from previous page (columns: Title, Detect, Mitigate, Task(s), Dataset, Evaluation Metric):
- Citation: A Key to Building Responsible and Accountable Large Language Models (Huang and Chang, 2023). Task(s): N/A; Dataset: N/A; Evaluation Metric: N/A.
- Zero-resource hallucination prevention for large language models (Luo et al., 2023). Task(s): Concept extraction, guessing, aggregation; Dataset: Concept-7; Evaluation Metric: AUC, ACC, F1, PEA.
- RARR: Researching and Revising What Language Models Say, Using Language Models (Gao et al., 2023). Task(s): Editing for Attribution; Dataset: NQ, SQA, QReCC; Evaluation Metric: Attributable to Identified Sources (Castaldo and Yang, 2007).

Table 1 (Image modality):
- Evaluating Object Hallucination in Large Vision-Language Models (Li et al., 2023e). Task(s): Image captioning; Dataset: MSCOCO (Lin et al., 2014); Evaluation Metric: Caption Hallucination Assessment with Image Relevance (CHAIR) (Rohrbach et al., 2018).
- Detecting and Preventing Hallucinations in Large Vision Language Models (Gunjal et al., 2023). Task(s): Visual Question Answering (VQA); Dataset: M-HalDetect; Evaluation Metric: Accuracy.
- Plausible May Not Be Faithful: Probing Object Hallucination in Vision-Language Pre-training (Dai et al., 2022). Task(s): Image captioning; Dataset: CHAIR (Rohrbach et al., 2018); Evaluation Metric: CIDEr.

Table 1 (Video modality):
- Let's Think Frame by Frame: Evaluating Video Chain of Thought with Video Infilling and Prediction (Himakunthala et al., 2023). Task(s): Video infilling, Scene prediction; Dataset: Manual; Evaluation Metric: N/A.
- Putting People in Their Place:
2309.05922#17
2309.05922#19
2309.05922
[ "2307.12168" ]
2309.05922#19
A Survey of Hallucination in Large Foundation Models
Affordance-Aware Human Insertion into Scenes (Kulal et al., 2023). Task(s): Affordance prediction; Dataset: Manual (2.4M video clips); Evaluation Metric: FID, PCKh.
- VideoChat: Chat-Centric Video Understanding (Li et al., 2023c). Task(s): Visual dialogue; Dataset: Manual; Evaluation Metric: N/A.
- Models See Hallucinations: Evaluating the Factuality in Video Captioning (Liu and Wan, 2023). Task(s): Video captioning; Dataset: ActivityNet Captions (Krishna et al., 2017), YouCook2 (Krishna et al., 2017); Evaluation Metric: Factual consistency for Video Captioning (FactVC).

Table 1 (Audio modality):
- LP-MusicCaps: LLM-based pseudo music captioning (Doh et al., 2023). Task(s): Audio captioning; Dataset: LP-MusicCaps;
2309.05922#18
2309.05922#20
2309.05922
[ "2307.12168" ]
2309.05922#20
A Survey of Hallucination in Large Foundation Models
Evaluation Metric: BLEU1 to 4 (B1, B2, B3, B4), METEOR (M), and ROUGE-L (R-L).
- Audio-Journey: Efficient Visual+LLM-aided Audio Encodec Diffusion (Li et al., 2023a). Task(s): Classification; Dataset: Manual; Evaluation Metric: Mean average precision (mAP).

Table 1: Summary of all the works related to hallucination in all four modalities of the large foundation models. Here, we have divided each work by the following factors: 1. Detection, 2. Mitigation, 3. Tasks, 4. Datasets, and 5. Evaluation metrics.

# 7 Conclusion and Future Directions

We concisely classify the existing research in the field of hallucination within LFMs. We provide an in-depth analysis of these LFMs, encompassing critical aspects including 1. Detection, 2. Mitigation, 3. Tasks, 4. Datasets, and 5. Evaluation metrics.
2309.05922#19
2309.05922#21
2309.05922
[ "2307.12168" ]
2309.05922#21
A Survey of Hallucination in Large Foundation Models
Some possible future directions to address the hallucination challenge in LFMs are given below.

# 7.1 Automated Evaluation of Hallucination

In the context of natural language processing and machine learning, hallucination refers to the generation of incorrect or fabricated information by AI models. This can be a significant problem, especially in applications like text generation, where the goal is to provide accurate and reliable information. Here are some potential future directions in the automated evaluation of hallucination:

Development of Evaluation Metrics: Researchers can work on creating specialized evaluation metrics that are capable of detecting hallucination in generated content. These metrics may consider factors such as factual accuracy, coherence, and consistency. Advanced machine learning models could be trained to assess generated text against these metrics. A minimal example of such a metric is sketched at the end of this subsection.

Human-AI Collaboration: Combining human judgment with automated evaluation systems can be a promising direction. Crowdsourcing platforms can be used to gather human assessments of AI-generated content, which can then be used to train models for automated evaluation. This hybrid approach can help in capturing nuances that are challenging for automated systems alone.

Adversarial Testing: Researchers can develop adversarial testing methodologies where AI systems are exposed to specially crafted inputs designed to trigger hallucination. This can help in identifying weaknesses in AI models and improving their robustness against hallucination.

Fine-Tuning Strategies: Fine-tuning pre-trained language models specifically to reduce hallucination is another potential direction. Models can be fine-tuned on datasets that emphasize fact-checking and accuracy to encourage the generation of more reliable content.
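As one concrete direction for automated evaluation, a consistency-based score in the spirit of sampling-based detectors such as SELFCHECKGPT (Manakul et al., 2023) can be sketched as follows. This is a minimal illustration only: `sample_responses` is a hypothetical function standing in for repeated LLM calls, and the lexical-overlap consistency measure is a deliberately simple proxy for the NLI- or probability-based scores used in the literature.

```python
# Minimal sketch: flag likely hallucinations by checking whether independently
# sampled answers agree with each other. sample_responses is a hypothetical
# stand-in for calling an LLM several times with temperature > 0.

def _tokens(text):
    return set(text.lower().split())

def consistency_score(answers):
    """Average pairwise Jaccard overlap between sampled answers (1.0 = identical)."""
    pairs, total = 0, 0.0
    for i in range(len(answers)):
        for j in range(i + 1, len(answers)):
            a, b = _tokens(answers[i]), _tokens(answers[j])
            total += len(a & b) / max(len(a | b), 1)
            pairs += 1
    return total / max(pairs, 1)

def likely_hallucinated(question, sample_responses, n_samples=5, threshold=0.4):
    answers = [sample_responses(question) for _ in range(n_samples)]
    return consistency_score(answers) < threshold   # low agreement -> suspicious
```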
2309.05922#20
2309.05922#22
2309.05922
[ "2307.12168" ]
2309.05922#22
A Survey of Hallucination in Large Foundation Models
# 7.2 Improving Detection and Mitigation Strategies with Curated Sources of Knowledge

Detecting and mitigating issues like bias, misinformation, and low-quality content in AI-generated text is crucial for responsible AI development. Curated sources of knowledge can play a significant role in achieving this. Here are some future directions:

Knowledge Graph Integration: Incorporating knowledge graphs and curated knowledge bases into AI models can enhance their understanding of factual information and relationships between concepts. This can aid in both content generation and fact-checking.

Fact-Checking and Verification Models: Develop specialized models that focus on fact-checking and content verification. These models can use curated sources of knowledge to cross-reference generated content and identify inaccuracies or inconsistencies. A minimal cross-referencing sketch is given at the end of this section.

Bias Detection and Mitigation: Curated sources of knowledge can be used to train AI models to recognize and reduce biases in generated content. AI systems can be programmed to check content for potential biases and suggest more balanced alternatives.

Active Learning: Continuously update and refine curated knowledge sources through active learning. AI systems can be designed to seek human input and validation for ambiguous or new information, thus improving the quality of curated knowledge.
2309.05922#21
2309.05922#23
2309.05922
[ "2307.12168" ]
2309.05922#23
A Survey of Hallucination in Large Foundation Models
Ethical Guidelines and Regulation: Future directions may also involve the development of ethical guidelines and regulatory frameworks for the use of curated knowledge sources in AI development. This could ensure responsible and transparent use of curated knowledge to mitigate potential risks.

In summary, these future directions aim to address the challenges of hallucination detection and mitigation, as well as the responsible use of curated knowledge to enhance the quality and reliability of AI-generated content. They involve a combination of advanced machine learning techniques, human-AI collaboration, and ethical considerations to ensure AI systems produce accurate and trustworthy information.
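To make the fact-checking direction above concrete, here is a deliberately simple sketch of cross-referencing generated statements against a curated knowledge source. The `curated_facts` store, the sentence splitting, and the substring-matching heuristic are all illustrative assumptions; a production system would use entity linking, retrieval over a knowledge graph, and an entailment model instead.

```python
# Minimal sketch: flag generated sentences that cannot be confirmed against a
# curated knowledge store. The store and the matching heuristic are assumptions.
import re

curated_facts = {
    # subject -> attribute -> accepted value (stand-in for a real curated knowledge base)
    "skytrofa": {"indication": "growth hormone deficiency"},
}

def check_claim(sentence, facts):
    """Return (verdict, detail) for one generated sentence against the curated store."""
    text = sentence.lower()
    for subject, attributes in facts.items():
        if subject not in text:
            continue
        supported = [a for a, v in attributes.items() if v.lower() in text]
        if supported:
            return "supported", f"{subject}: matches curated {', '.join(supported)}"
        return "unverified", f"{subject}: no curated attribute confirmed in this sentence"
    return "no_coverage", "no curated entry mentioned in this sentence"

def review_generation(generated_text, facts=curated_facts):
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", generated_text) if s.strip()]
    return [(s, *check_claim(s, facts)) for s in sentences]

# Example: running the hallucinated Skytrofa description from Figure 3 through
# review_generation would mark its claim as "unverified" against this store.
```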
2309.05922#22
2309.05922#24
2309.05922
[ "2307.12168" ]
2309.05922#24
A Survey of Hallucination in Large Foundation Models
# References Amos Azaria and Tom Mitchell. 2023. The internal state of an llm knows when its lying. arXiv preprint arXiv:2304.13734. Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–
2309.05922#23
2309.05922#25
2309.05922
[ "2307.12168" ]
2309.05922#25
A Survey of Hallucination in Large Foundation Models
1901. Eric T Castaldo and Edmund Y Yang. 2007. Severe sepsis attributable to community-associated methicillin-resistant staphylococcus aureus: an emerging fatal problem. The American Surgeon, 73(7):684–687. Anthony Chen, Panupong Pasupat, Sameer Singh, Hongrae Lee, and Kelvin Guu. 2023. Purr: Efficiently editing language model hallucinations by denoising language model corruptions.
2309.05922#24
2309.05922#26
2309.05922
[ "2307.12168" ]
2309.05922#26
A Survey of Hallucination in Large Foundation Models
Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, and Li Yuan. 2023. Chatlaw: Open-source legal large language model with integrated external knowledge bases. arXiv preprint arXiv:2306.16092. Wenliang Dai, Zihan Liu, Ziwei Ji, Dan Su, and Pascale Fung. 2022. Plausible may not be faithful: Probing object hallucination in vision-language pre-training. arXiv preprint arXiv:2210.07688.
2309.05922#25
2309.05922#27
2309.05922
[ "2307.12168" ]
2309.05922#27
A Survey of Hallucination in Large Foundation Models
SeungHeon Doh, Keunwoo Choi, Jongpil Lee, and Juhan Nam. 2023. Lp-musiccaps: Llm-based pseudo music captioning. arXiv preprint arXiv:2307.16372. Mohamed Elaraby, Mengyin Lu, Jacob Dunn, Xueying Zhang, Yu Wang, and Shizhu Liu. 2023. Halo: Estimation and reduction of hallucinations in open-source weak large language models. arXiv preprint arXiv:2308.11764. Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, et al. 2023.
2309.05922#26
2309.05922#28
2309.05922
[ "2307.12168" ]
2309.05922#28
A Survey of Hallucination in Large Foundation Models
Rarr: Researching and revising what language models say, using language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16477–16508. Anisha Gunjal, Jihan Yin, and Erhan Bas. 2023. Detecting and preventing hallucinations in large vision language models. arXiv preprint arXiv:2308.06394. Vaishnavi Himakunthala, Andy Ouyang, Daniel Rose, Ryan He, Alex Mei, Yujie Lu, Chinmay Sonar, Michael Saxon, and William Yang Wang. 2023.
2309.05922#27
2309.05922#29
2309.05922
[ "2307.12168" ]
2309.05922#29
A Survey of Hallucination in Large Foundation Models
Let's think frame by frame: Evaluating video chain of thought with video infilling and prediction. arXiv preprint arXiv:2305.13903. Jie Huang and Kevin Chen-Chuan Chang. 2023. Citation: A key to building responsible and accountable large language models. arXiv preprint arXiv:2307.02185. Susmit Jha, Sumit Kumar Jha, Patrick Lincoln, Nathaniel D Bastian, Alvaro Velasquez, and Sandeep Neema. 2023. Dehallucinating large language models using formal methods guided iterative prompting. In 2023 IEEE International Conference on Assured Autonomy (ICAA), pages 149–152. IEEE. Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1–38. Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. 2017. Dense-captioning events in videos. In Proceedings of the IEEE international conference on computer vision, pages 706–
2309.05922#28
2309.05922#30
2309.05922
[ "2307.12168" ]
2309.05922#30
A Survey of Hallucination in Large Foundation Models
715. Sumith Kulal, Tim Brooks, Alex Aiken, Jiajun Wu, Jimei Yang, Jingwan Lu, Alexei A Efros, and Krishna Kumar Singh. 2023. Putting people in their place: Affordance-aware human insertion into scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17089–17099. Juncheng B Li, Jackson Sam Michaels, Laura Yao, Lijun Yu, Zach Wood-Doughty, and Florian Metze. 2023a.
2309.05922#29
2309.05922#31
2309.05922
[ "2307.12168" ]
2309.05922#31
A Survey of Hallucination in Large Foundation Models
Audio-journey: Efficient visual+llm-aided audio encodec diffusion. In Workshop on Efficient Systems for Foundation Models @ ICML2023. Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2023b. Helma: A large-scale hallucination evaluation benchmark for large language models. arXiv preprint arXiv:2305.11747. KunChang Li, Yinan He, Yi Wang, Yizhuo Li, Wenhai Wang, Ping Luo, Yali Wang, Limin Wang, and Yu Qiao. 2023c.
2309.05922#30
2309.05922#32
2309.05922
[ "2307.12168" ]
2309.05922#32
A Survey of Hallucination in Large Foundation Models
Videochat: Chat-centric video understanding. arXiv preprint arXiv:2305.06355. Xingxuan Li, Ruochen Zhao, Yew Ken Chia, Bosheng Ding, Lidong Bing, Shafiq Joty, and Soujanya Poria. 2023d. Chain of knowledge: A framework for grounding large language models with structured knowledge bases. arXiv preprint arXiv:2305.13269. Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. 2023e. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014.
2309.05922#31
2309.05922#33
2309.05922
[ "2307.12168" ]
2309.05922#33
A Survey of Hallucination in Large Foundation Models
Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740–755. Springer. Hui Liu and Xiaojun Wan. 2023. Models see hallucinations: Evaluating the factuality in video captioning. arXiv preprint arXiv:2303.02961. Junyu Luo, Cao Xiao, and Fenglong Ma. 2023. Zero-resource hallucination prevention for large language models. arXiv preprint arXiv:2309.02654. Potsawee Manakul, Adian Liusie, and Mark J. F. Gales. 2023. Selfcheckgpt:
2309.05922#32
2309.05922#34
2309.05922
[ "2307.12168" ]
2309.05922#34
A Survey of Hallucination in Large Foundation Models
Zero-resource black-box hallucination detection for generative large language models. Nick McKenna, Tianyi Li, Liang Cheng, Mohammad Javad Hosseini, Mark Johnson, and Mark Steedman. 2023. Sources of hallucination by large language models on inference tasks. arXiv preprint arXiv:2305.14552. Niels Mündler, Jingxuan He, Slobodan Jenko, and Martin Vechev. 2023. Self-contradictory hallucinations of large language models: Evaluation, detection and mitigation. Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, et al. 2023.
2309.05922#33
2309.05922#35
2309.05922
[ "2307.12168" ]
2309.05922#35
A Survey of Hallucination in Large Foundation Models
Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813. Jonas Pfeiffer, Francesco Piccinno, Massimo Nicosia, Xinyi Wang, Machel Reid, and Sebastian Ruder. 2023. mmt5: Modular multilingual pre-training solves source language hallucinations. Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, and Kate Saenko. 2018. Object hallucination in image captioning. arXiv preprint arXiv:1809.02156. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684–10695. Logesh Kumar Umapathi, Ankit Pal, and Malaikannan Sankarasubbu. 2023.
2309.05922#34
2309.05922#36
2309.05922
[ "2307.12168" ]
2309.05922#36
A Survey of Hallucination in Large Foundation Models
Med-halt: Medical domain hallucination test for large language models. arXiv preprint arXiv:2307.15343. Neeraj Varshney, Wenlin Yao, Hongming Zhang, Jianshu Chen, and Dong Yu. 2023. A stitch in time saves nine: Detecting and mitigating hallucinations of llms by validating low-confidence generation. arXiv preprint arXiv:2307.03987. Kyle Wiggers. 2023. Are ai models doomed to always hallucinate? Jing Wu, Jennifer Hobbs, and Naira Hovakimyan. 2023. Hallucination improves the performance of unsupervised visual representation learning. arXiv preprint arXiv:2307.12168. Yongsheng Yu, Heng Fan, and Libo Zhang. 2023. Deficiency-aware masked transformer for video inpainting. arXiv preprint arXiv:2307.08629. Muru Zhang, Ofir Press, William Merrill, Alisa Liu, and Noah A. Smith. 2023a.
2309.05922#35
2309.05922#37
2309.05922
[ "2307.12168" ]
2309.05922#37
A Survey of Hallucination in Large Foundation Models
How language model hallucinations can snowball. Shuo Zhang, Liangming Pan, Junzhou Zhao, and William Yang Wang. 2023b. Mitigating language model hallucination with interactive question-knowledge alignment. Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, et al. 2023c. Siren's song in the ai ocean:
2309.05922#36
2309.05922#38
2309.05922
[ "2307.12168" ]
2309.05922#38
A Survey of Hallucination in Large Foundation Models
A survey on hallucination in large language models. arXiv preprint arXiv:2309.01219.
2309.05922#37
2309.05922
[ "2307.12168" ]
2309.05898#0
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
arXiv:2309.05898v1 [cs.GT] 12 Sep 2023

# Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing

Nunzio Lorè
Network Science Institute
Multi-Agent Intelligent Complex Systems (MAGICS) Lab
Northeastern University, Boston, Massachusetts, USA
[email protected]
2309.05898#1
2309.05898
[ "2305.16867" ]
2309.05898#1
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Babak Heydari*
College of Engineering and Network Science Institute
Multi-Agent Intelligent Complex Systems (MAGICS) Lab
Northeastern University, Boston, Massachusetts, USA
[email protected]

*Corresponding author. Preprint. Under review.

# Abstract

This paper investigates the strategic decision-making capabilities of three Large Language Models (LLMs): GPT-3.5, GPT-4, and LLaMa-2, within the framework of game theory. Utilizing four canonical two-player games (Prisoner's Dilemma, Stag Hunt, Snowdrift, and Prisoner's Delight), we explore how these models navigate social dilemmas, situations where players can either cooperate for a collective benefit or defect for individual gain. Crucially, we extend our analysis to examine the role of contextual framing, such as diplomatic relations or casual friendships, in shaping the models' decisions. Our findings reveal a complex landscape: while GPT-3.5 is highly sensitive to contextual framing, it shows limited ability to engage in abstract strategic reasoning. Both GPT-4 and LLaMa-2 adjust their strategies based on game structure and context, but LLaMa-2 exhibits a more nuanced understanding of the games' underlying mechanics. These results highlight the current limitations and varied proficiencies of LLMs in strategic decision-making, cautioning against their unqualified use in tasks requiring complex strategic reasoning.

# Introduction

Large Language Models (LLMs) such as GPT from OpenAI and LLaMa-2 from Meta have garnered significant attention for their ability to perform a range of human-like tasks that extend far beyond simple conversation. Some argue that these models may serve as an intermediate step toward Artificial General Intelligence (AGI) [1]. Recent advancements have shown GPT-4 passing the bar exam [2] and GPT-3 solving complex mathematical problems [3]. Despite these achievements, these models exhibit limitations, notably in tasks like network structure recognition [4]. Social and behavioral science research on Large Language Models (LLMs), including GPT and LLaMa-2, is divided into two principal streams: one that explores human-like cognitive capabilities such as reasoning and theory of mind [5, 6, 7, 8, 9], and another that evaluates performance in comparison to human skills across a variety of tasks [10, 11, 12]. In the field of economics, the emphasis is predominantly on performance evaluation, exploring applications like market research
2309.05898#0
2309.05898#2
2309.05898
[ "2305.16867" ]
2309.05898#2
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
and sentiment analysis [13, 14, 15]. This dual focus coalesces in social science research, where LLMs have gained attention for their potential to simulate human behavior in experimental settings [16, 17, 18, 19]. Notably, within the intricate framework of social dilemmas and game theory, LLMs are being tested for both their cognitive reasoning skills and performance outcomes [20, 21, 22, 23]. Existing studies indicate that LLMs can mimic human behavior to some extent [22, 21], yet their aptitude for strategic decision-making in game-theoretic contexts is still an area for exploration. Beyond the structural elements of a game, the contextual framing can significantly affect decision-making processes. Prior research on human behavior has underlined the powerful role of context in shaping strategic choices; for example, the framing of a game as a Wall Street venture versus a community endeavor led to divergent decisions [24]. As a result, our study aims to go beyond assessing the fundamental strategic capabilities of LLMs, also considering the influence of game structure and contextual framing on their decision-making. To disentangle the complexities of strategic decision-making in LLMs, we conduct a series of game-theoretic simulations on three distinct models: GPT-3.5, GPT-4, and LLaMa-2. We focus on social dilemmas, games in which players may either cooperate for collective benefit or defect for individual gain. Starting from the well-known Prisoner's Dilemma, we expand our study to include other two-player games such as the Stag Hunt, Snowdrift, and Prisoner's Delight (aka Harmony Game). Besides examining these games, we introduce five different contexts, ranging from business and diplomatic discussions to casual interactions between friends, to evaluate how contextual framing influences strategic choices. Our primary research question is to determine the relative significance of game structure versus contextual framing in shaping the behavior of these models. Our findings unveil the subtle intricacies in how each of the examined Large Language Models responds to strategic scenarios. GPT-3.5 appears particularly sensitive to contextual framing but demonstrates limited proficiency in grasping abstract strategic considerations, such as reasoning based on a best-response strategy.
2309.05898#1
2309.05898#3
2309.05898
[ "2305.16867" ]
2309.05898#3
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
In contrast, both GPT-4 and LLaMa-2 exhibit a more balanced approach, adjusting their strategies based on both the intrinsic game structure and the contextual framing. Notably, the impact of context is more pronounced in specific domains, such as interactions framed as games among friends, where the game structure itself takes a backseat. When it comes to comparing GPT-4 and LLaMa-2, our findings reveal that GPT-4, on average, places greater weight on the game structure than on context, relative to LLaMa-2. However, prioritizing game structure over context does not translate to a nuanced differentiation between distinct game types. In fact, GPT-4 seems to employ a binary threshold approach, categorizing games into
2309.05898#2
2309.05898#4
2309.05898
[ "2305.16867" ]
2309.05898#4
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
"high" and "low" social dilemma buckets, rather than discerning the unique features of each game. Contrary to this, LLaMa-2 exhibits a more finely-grained understanding of the various game structures, even though it places greater emphasis on contextual factors compared to GPT-4. This suggests that LLaMa-2 is better equipped to navigate the subtleties of different strategic scenarios while also incorporating context into its decision-making, whereas GPT-4 adopts a more generalized, structure-centric strategy. In addition to analyzing the decision-making patterns of these large language models, we examined anecdotal evidence to further decipher the mechanisms behind their distinct behaviors. GPT-3.5 appears to have a rudimentary understanding of strategic scenarios, frequently failing to identify best responses and committing a variety of basic mathematical errors. GPT-4, on the other hand, demonstrates a higher level of sophistication in its arguments. It often begins its reasoning by modeling the game structure and conditioning its responses based on anticipated actions of other players. However, GPT-4 also tends to mischaracterize game structures, often reducing them to variations of the Prisoner's Dilemma, even when the structural nuances suggest otherwise. Interestingly, it adopts a different line of reasoning in games framed between friends, emphasizing the importance of longer-term relationships over immediate payoff maximization, despite explicit game descriptions to the contrary. LLaMa-2 approaches these strategic scenarios differently, initially abstracting the problem to a higher level using explicit game-theoretic language. It then layers contextual elements on top of this game-theoretic foundation, offering a well-rounded analysis that encompasses both game structure and situational factors.

# 2 Methods

Figure 1 shows the schematic workflow of this research and the process through which we generate our results. To each game we combine a context, a term we use to indicate the social environment in
2309.05898#3
2309.05898#5
2309.05898
[ "2305.16867" ]
2309.05898#5
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
which the interaction described by the model takes place. We run 300 initializations per LLM for each of the 20 possible unique combinations of context and game, before aggregating the results in order to conduct our statistical analysis.

Figure 1: A schematic explanation of our data collecting process. A combination of a contextual prompt and a game prompt is fed into one of the LLMs we examine in this paper, namely GPT-3.5, GPT-4, and LLaMa-2. Each combination creates a unique scenario, and for each scenario we collect 300 initializations. The data for all scenarios played by each algorithm is then aggregated and used for our statistical analysis, while the motivations provided are scrutinized in our Reasoning Exploration section.

We run our experiments using OpenAI's gpt-3.5-turbo-16k and gpt-4 models, interfacing with them through Python's openai package. For LLaMa-2, we utilize Northeastern University's High Performance Cluster (HPC) as the model lacks a dedicated API or user interface. We access LLaMa-2 via the HuggingFace pipeline. To standardize our simulations, we restrict the response token count to 50 for the OpenAI models and 8 for LLaMa-2, setting the temperature parameter at 0.8. We opt for this temperature setting for several reasons: first, it mirrors the default settings in user-based applications like chatGPT, providing a realistic baseline; second, it allows for the exploration of multiple plausible actions in games with mixed Nash equilibria; and third, lower temperature settings risk obscuring the inherently probabilistic nature of these algorithms and may produce unengaging results. We note that high temperatures are commonly used in related working papers [25, 26].

Our experimental design includes two distinct prompts for each LLM. The initial prompt sets the context, outlining the environment and directing the algorithm to assume a specific role. Its aim is to create a realistic setting for the game to take place. The second prompt establishes the "rules," or more accurately, the payoff structure of the game. While contextual prompts are disseminated via the system role, the payoff prompts are communicated through the user role. In both cases, we adhere to best practices such as advising the model to deliberate thoughtfully and utilizing longer prompts for clarity [25, 26].
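The data-collection loop described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it assumes the pre-1.0 `openai` Python interface (`openai.ChatCompletion.create`), and the prompt strings and response parsing are placeholders rather than the actual prompts from Appendix A.

```python
# Illustrative sketch of one scenario (context x game) with 300 initializations,
# assuming the pre-1.0 openai package; prompts below are placeholders, not the
# authors' actual prompts from Appendix A.
import openai

CONTEXT_PROMPT = "You are the CEO of a firm meeting the CEO of another firm..."        # system role
GAME_PROMPT = ("You must choose between action C and action D. If you both choose C, "
               "you each earn R points; if you choose D while they choose C, you earn T points; ...")  # user role

def run_scenario(model="gpt-4", n_init=300):
    actions = []
    for _ in range(n_init):
        response = openai.ChatCompletion.create(
            model=model,
            messages=[
                {"role": "system", "content": CONTEXT_PROMPT},
                {"role": "user", "content": GAME_PROMPT},
            ],
            temperature=0.8,   # temperature used in the paper
            max_tokens=50,     # 50-token limit for the OpenAI models
        )
        reply = response["choices"][0]["message"]["content"]
        actions.append("C" if reply.strip().upper().startswith("C") else "D")  # crude parser (assumption)
    return actions

# cooperation_rate = run_scenario().count("C") / 300
```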
2309.05898#4
2309.05898#6
2309.05898
[ "2305.16867" ]
2309.05898#6
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
The contextual prompts are crafted to be universally applicable to the range of games examined, sacrificing some degree of specificity for broader relevance. Detailed text for each prompt is available in Appendix A. Summarizing, we present the following scenarios:

• A summit between two heads of state from two different countries ("IR"),
• A meeting between two CEOs from two different firms ("biz"),
• A conference between two industry leaders belonging to two different companies making a joint commitment on environmental regulations ("environment"),
2309.05898#5
2309.05898#7
2309.05898
[ "2305.16867" ]
2309.05898#7
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
• A talk between two employees who belong to the same team but are competing for a promotion ("team"),
• A chat between two friends trying to reach a compromise ("friendsharing").

The games we use for our analysis are borrowed from the literature on social dilemmas in game theory. In particular, they all have the following form:

       C        D
C   (R, R)   (S, T)
D   (T, S)   (P, P)

In this paper, we define "social dilemmas" as any strategic interaction models that feature two types of actions: a socially optimal action that benefits both players if chosen mutually, and an individually optimal action that advantages one player at the expense of the other. We refer to the socially optimal action as "cooperation," abbreviated as "C," and the individually optimal action as "defection," also abbreviated as "D." For clarity, each pair of actions taken by players corresponds to a payoff vector, which we express in terms of utils or points, following standard game theory conventions. The first entry in the vector represents the row player's payoff, while the second entry is reserved for the column player. In this framework, "R" signifies the reward for mutual cooperation, "T" represents temptation to defect when the other player cooperates, "S" indicates the sucker's payoff for cooperating against a defector, and "P" stands for the punishment both players receive when both choose to defect, typically leading to a suboptimal outcome for both. Different relationships between these values give rise to different games:

• When T > R > P > S, the game is the Prisoner's Dilemma;
• When T > R > S > P, the game is Snowdrift, also known as Chicken;
• When R > T > P > S, the game is the Stag Hunt;
• When R > T > S > P, the game is the Prisoner's Delight, also known as Harmony.
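The payoff orderings above map mechanically to game types, and for the two games with an interior mixed equilibrium the equilibrium cooperation probability follows from the standard indifference condition for symmetric 2x2 games. The sketch below is an illustrative aid and is not taken from the paper; the formula p = (P - S) / (R - S - T + P) is the textbook indifference result.

```python
# Illustrative helpers (not from the paper): classify a symmetric 2x2 social dilemma
# from its payoffs and, where one exists, compute the interior mixed-equilibrium
# probability of cooperation via the standard indifference condition.

def classify_game(R, S, T, P):
    if T > R > P > S:
        return "Prisoner's Dilemma"
    if T > R > S > P:
        return "Snowdrift (Chicken)"
    if R > T > P > S:
        return "Stag Hunt"
    if R > T > S > P:
        return "Prisoner's Delight (Harmony)"
    return "other"

def mixed_equilibrium_coop_prob(R, S, T, P):
    """p such that the opponent is indifferent: p*R + (1-p)*S == p*T + (1-p)*P."""
    denom = R - S - T + P
    if denom == 0:
        return None
    p = (P - S) / denom
    return p if 0 < p < 1 else None   # interior equilibrium only (Snowdrift, Stag Hunt)

# Example: classify_game(3, 1, 4, 2) -> "Prisoner's Dilemma";
# mixed_equilibrium_coop_prob(3, 2, 4, 1) -> 0.5 for a Snowdrift parameterization.
```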
2309.05898#6
2309.05898#8
2309.05898
[ "2305.16867" ]
2309.05898#8
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
This structure is in the spirit of [27] and [28], in which the same four game-theoretic models are used to capture different types and degrees of social dilemma. We point out that Prisoner's Delight is not exactly a dilemma, but rather an anti-dilemma, as choosing to cooperate is both socially and individually optimal. On the opposite end of the spectrum lies the Prisoner's Dilemma, in which defecting is always optimal and thus leads to a situation in which both players are worse off, at least according to standard predictions in Game Theory. Here we introduce a piece of important terminology: in the Prisoner's Dilemma and in the Prisoner's Delight, only one action is justifiable. This means that one action strictly dominates another, and therefore a rational player would only ever play the strictly dominant action. The Stag Hunt and Snowdrift lie somewhere in between, with both cooperation and defection being justifiable. More specifically, in the Stag Hunt, the Nash Equilibrium in pure actions is reached if both players coordinate on the same action (with the cooperative equilibrium being payoff dominant), whereas in Snowdrift said equilibrium is reached if both players coordinate on opposite actions. As neither action strictly dominates the other, a rational player is justified in playing either or both, and in fact for these games an equilibrium exists in mixed strategies as well. For each game and for each context, we run 300 initializations and record the action taken by the LLM agent, and keep track of the rate of cooperation by the LLM agents for our follow-up analysis. For each experiment, we keep the prompts constant across LLMs.

# 3 Results

Figure 2 displays an overview of our results for all three LLMs. To better clarify the role of game structure vs. framing context, results are aggregated at different levels: we group the observations at the game level on the left and at the context level on the right, and each row represents a different LLM. A few things appear immediately clear when visually inspecting the figure. First, GPT-3.5 tends not to cooperate regardless of game or context. Second, GPT-4's choice of actions is almost perfectly bimodal, with either full cooperation or full defection.
2309.05898#7
2309.05898#9
2309.05898
[ "2305.16867" ]
2309.05898#9
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
s choice of actions is almost perfectly 4 bimodal, with either full cooperation or full defection. Finally, LLaMa-2â s behavior approximates that of GPT-4 to a certain extent, but with a wider degree of variation between response both across games and across contexts. A more detailed view of strategical choice for each game, context and LLM is presented in Appendix B. fests by Game, PT 5 $ : fess by Content Tas $ : (a) Results grouped game, GPT-3.5 (b) Results grouped by context, GPT-3.5 (c) Results grouped by game, GPT-4 (d) Results grouped by context, GPT-4 (e) Results grouped by game, LLaMa-2 (f) Results grouped by context, LLaMa-2 fess by Gare, FT 3 :
2309.05898#8
2309.05898#10
2309.05898
[ "2305.16867" ]
2309.05898#10
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Figure 2: Summary of our findings, displayed using bar charts and outcomes grouped either by game or by context. On the y axis we display the average propensity to cooperate in a given game and under a given context, with standard error bars. Figures (a) and (b) refer to our experiments using GPT-3.5, and anticipate one of our key findings: context matters more than game in determining the choice of action for this algorithm. Figures (c) and (d) instead show how the opposite is true for GPT-4: almost all contexts are more or less playing the same strategy, that of cooperating in two of the four games and defecting in the remaining two. Finally, Figures (e) and (f) present our results for LLaMa-2, whose choice of action clearly depends both on context and on the structure of the game.

To further corroborate and substantiate our findings, we turn to dominance analysis using STAT. In practice, dominance analysis is used to study how the prediction error changes when a given independent variable is omitted from a statistical model. This procedure generates 2^x − 1 nested models, with x being the number of regressors. The larger the increase on average over the nested models in error, the greater the importance of the predictor [29]. We run a logit regression for each LLM, encoding each game and each context as a dummy variable, and then we use dominance analysis to identify which dummies have the largest impact on the dependent variable. The output is presented in Table 1.
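A minimal sketch of this dominance-analysis procedure is shown below. It illustrates the general method (fit the model on every subset of predictors and average how much adding one predictor reduces prediction error); it is not the authors' exact routine and assumes scikit-learn, a binary cooperate/defect outcome, and dummy-coded game and context columns.

```python
# Illustrative dominance analysis for a logit model (not the authors' exact code):
# a predictor's importance is its average reduction in log-loss over all nested
# models that exclude it. X is a numpy array of dummy predictors, y is binary.
from itertools import combinations
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def _error(X, y, cols):
    if not cols:
        p = np.full(len(y), y.mean())              # intercept-only baseline
        return log_loss(y, p)
    model = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
    return log_loss(y, model.predict_proba(X[:, cols])[:, 1])

def dominance(X, y, names):
    k = X.shape[1]
    scores = {}
    for j in range(k):
        others = [i for i in range(k) if i != j]
        gains = []
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                subset = list(subset)
                gains.append(_error(X, y, subset) - _error(X, y, subset + [j]))
        scores[names[j]] = float(np.mean(gains))   # average error reduction = dominance score
    return scores
```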
2309.05898#9
2309.05898#11
2309.05898
[ "2305.16867" ]
2309.05898#11
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
This procedure generates 2^x - 1 nested models, with x being the number of regressors. The larger the average increase in error across the nested models when a predictor is omitted, the greater that predictor's importance [29]. We run a logit regression for each LLM, encoding each game and each context as a dummy variable, and then we use dominance analysis to identify which dummies have the largest impact on the dependent variable. The output is presented in Table 1. We notice that "friendsharing" consistently ranks in the top spots across all algorithms, and indeed by analyzing Figure 2 it appears immediately clear that this context is consistently associated with higher rates of cooperation regardless of game or LLM. For GPT-3.5, contexts represent the five most important variables, with the games with a sole rationalizable action occupying positions 6 and 7. This suggests that GPT-3.5 might have a tendency to put weight on context first and on game structure last, with a slight bias for "simpler" games. For GPT-4, on the other hand, the ranking is almost perfectly inverted, with games being the regressors with the highest dominance score.
2309.05898#10
2309.05898#12
2309.05898
[ "2305.16867" ]
2309.05898#12
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Prisoner's Delight and Dilemma once again rank the highest among games for influence, while "friendsharing" is dethroned and relegated to the second position. The ranking for LLaMa-2 paints a more nuanced picture, with contexts and games alternating throughout the ranking, but with "friendsharing" still firmly establishing itself as the most influential variable. Table 1: Results of the dominance analysis for each LLM. While these rankings are in and of themselves informative, we are also interested in assessing whether contexts or games in aggregate are more important for a given LLM. We take the average of the importance scores for each group (contexts and games) and plot that in Figure 3. By observing the graph, we can conclude that for GPT-3.5 context matters more on average, while the opposite is true for GPT-4. Moreover, LLaMa-2 is also more interested in games than in contexts, but not to the same extent as GPT-4. Having concluded this preliminary analysis, we take a closer look at how LLMs play different games across different contexts, and how their choice of action differs from game-theoretic equilibria. We point out that in the case of Stag Hunt and Snowdrift we use equilibria in mixed actions as our point of comparison, but for both games playing any pure strategy could potentially constitute an equilibrium. Even so, we expect that a rational algorithm that randomizes between options would err towards the equilibrium mixture of these actions, and thus we include it as a general benchmark. Figure 3: Average importance of context variables vs. game variables for each LLM. Results follow from the dominance analysis of Table 1.
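The dominance analysis itself is easy to reproduce outside of Stata (the paper relies on the domin package [29]). The sketch below is an illustrative Python re-implementation under the assumption that the raw outcomes sit in a DataFrame with one row per initialization, a binary defect column, and one dummy column per game and per context; all column names here are placeholders rather than the paper's actual variables. Each predictor is scored by the average improvement in fit it contributes across all subsets of the remaining dummies, which is the general-dominance idea described above.

```python
from itertools import combinations

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss


def fit_quality(X, y):
    """Negative log-loss of a logistic fit (higher is better); the empty model uses the base rate."""
    if X.shape[1] == 0:
        prob = np.full(len(y), y.mean())
    else:
        prob = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    return -log_loss(y, prob)


def general_dominance(df, outcome, predictors):
    """Average gain in fit from adding each predictor, over all subsets of the other predictors."""
    y = df[outcome].to_numpy()
    importance = {}
    for p in predictors:
        others = [q for q in predictors if q != p]
        gains = []
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                without = fit_quality(df[list(subset)].to_numpy(), y)
                with_p = fit_quality(df[list(subset) + [p]].to_numpy(), y)
                gains.append(with_p - without)
        importance[p] = float(np.mean(gains))
    return pd.Series(importance).sort_values(ascending=False)


# Hypothetical usage: dummy columns for the four games and five contexts, binary outcome `defect`.
# scores = general_dominance(df, "defect", game_dummies + context_dummies)
# scores[game_dummies].mean(), scores[context_dummies].mean()  # Figure 3-style group averages
```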
2309.05898#11
2309.05898#13
2309.05898
[ "2305.16867" ]
2309.05898#13
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Of the three LLMs we examine, GPT-3.5 is the least advanced and the most widely available to the general public, since the free version of ChatGPT runs on 3.5. As seen in Figure 2, GPT-3.5 has a remarkable tendency to defect, even when doing so is not justifiable. Choosing to play an unjustifiable action is per se a symptom of non-strategic behavior, which, coupled with a general aversion to cooperation, might even indicate spiteful preferences. In game theory, players exhibit spiteful preferences when they gain utility from the losses incurred by their coplayer, or alternatively, when their utility gain is inversely proportional to the utility gain of their coplayers. This seems to be the case in the Prisoner's Delight, in which GPT-3.5 defects at substantial rates under every context. Conversely, it is true that GPT-3.5 cooperates more than at equilibrium when playing the Prisoner's Dilemma, and for some contexts its choices are strikingly prosocial when playing Snowdrift or Stag Hunt. More to the point, it appears that the responses of GPT-3.5 depend on the context of the prompt. In a context in which the interaction is said to occur between a pair of friends, GPT-3.5 is more prone to cooperate than in scenarios in which competition is either overtly accounted for or implied. In order to gain a quantitative understanding of this variance in behavior, we conduct a difference-in-proportions Z-test between different contexts, including the game-theoretic equilibrium as a baseline. This is because GPT-3.5 is a probabilistic model, and thus its actions are the result of sampling from a distribution. As such, we are interested in measuring how this distribution differs from equilibrium and from other samplings occurring under different contexts. The result of our analysis is displayed in Figure 4. We compare the proportion of initializations in which GPT-3.5 has chosen to defect in a given context against the same quantity either in another context or at equilibrium, and assess whether the difference is significantly different from zero. It bears pointing out that differences from equilibrium are not the sole argument against the rationality or sophistication of GPT-3.5.
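As a concrete illustration of this test, the snippet below runs the two-sample difference-in-proportions Z-test with statsmodels; the defection counts are hypothetical placeholders rather than the paper's data, and the comparison against a fixed equilibrium proportion uses the one-sample variant via the value argument.

```python
from statsmodels.stats.proportion import proportions_ztest

n_runs = 300          # initializations per (game, context) cell
defect_biz = 270      # hypothetical defection count under the "biz" context
defect_friend = 150   # hypothetical defection count under "friendsharing"

# Two-sample test: do the two contexts defect at different rates?
z, p = proportions_ztest(count=[defect_biz, defect_friend],
                         nobs=[n_runs, n_runs],
                         alternative="two-sided")
print(f"biz vs friendsharing: z = {z:.2f}, p = {p:.4f}")

# One-sample test against a theoretical benchmark, e.g. the mixed-equilibrium
# defection rate implied by the Snowdrift payoffs given in Appendix A (5/6).
z_eq, p_eq = proportions_ztest(count=defect_biz, nobs=n_runs, value=5 / 6)
print(f"biz vs equilibrium: z = {z_eq:.2f}, p = {p_eq:.4f}")
```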
2309.05898#12
2309.05898#14
2309.05898
[ "2305.16867" ]
2309.05898#14
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
In fact, the difference in strategies among different contexts when playing the same game is already an indicator that the LLM is susceptible to framing effects. Indeed, we observe that "friendsharing" and "IR" consistently cooperate more than other contexts, although not always at a statistically significant level. The opposite is true for "biz" and "environment," with "team" falling somewhere in the middle but closer to this latter group. Notably, all contexts play Snowdrift and Stag Hunt at levels close or equal to equilibrium, with small but statistically significant differences. Here and elsewhere in the paper we observe that Stag Hunt induces more cooperation than Snowdrift, a discomforting fact in light of Snowdrift's origins as a model for nuclear brinkmanship. Compared to its predecessor, GPT-4 performs a lot better in terms of both strategic behavior and cooperation. For instance, when playing Prisoner's Delight under any context, the LLM will always choose to cooperate, which is the sole justifiable action. Nevertheless, context dependence is still very strong: under "friendsharing" the algorithm will always choose to cooperate regardless of the game. As for the other contexts, in broad strokes, they could be characterized as following two regimes: a cooperative one when playing Stag Hunt and Prisoner's Delight, and a more hostile one when playing Snowdrift and the Prisoner's Dilemma. This grouping indicates that, just like GPT-3.5, GPT-4 behaves with more hostility when playing Snowdrift compared to when playing Stag Hunt, suggesting that the value of R holds substantial sway over the algorithm when an explicit maximization task is assigned to it.
2309.05898#13
2309.05898#15
2309.05898
[ "2305.16867" ]
2309.05898#15
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
This pattern plays out identically when looking at the two remaining games, Stag Hunt and Prisoner's Delight. The former is more varied in results and displays more propensity to defect, yet it closely tracks the results of Prisoner's Delight. Looking at the results for all four games side-by-side, a more general pattern emerges of GPT-4 becoming more cooperative across all contexts as the values of R and S increase. In other words, as cooperation becomes more rewarding, GPT-4 adjusts its preferences towards defecting less, as would be expected of a rational player. As for LLaMa-2, it presents a unique and interesting set of results. A brief glance at Figure 12 shows that, while "friendsharing" still induces the most cooperation, it is now joined by "environment" as the second most cooperative context. The other three contexts operate somewhat similarly and tend to be more prone to defection. Just like for GPT-4, games follow two regimes:
2309.05898#14
2309.05898#16
2309.05898
[ "2305.16867" ]
2309.05898#16
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
(a) Prisoner's Dilemma (b) Snowdrift (c) Stag Hunt (d) Prisoner's Delight Figure 4:
2309.05898#15
2309.05898#17
2309.05898
[ "2305.16867" ]
2309.05898#17
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Difference-in-Proportion testing using Z-score for each game across contexts when using GPT-3.5. A negative number (in orange) represents a lower propensity to defect vs. a different context, and vice-versa for a positive number (in dark blue). One asterisk (*) corresponds to 5% significance in a two-tailed Z-score test, two asterisks (**) represent 1% significance, and three asterisks (***) 0.1% significance. Results are antisymmetric across the main diagonal, so entry (i, j) is the negative of entry (j, i).
2309.05898#16
2309.05898#18
2309.05898
[ "2305.16867" ]
2309.05898#18
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Prisoner's Dilemma and Snowdrift induce higher defection, whereas Stag Hunt and Prisoner's Delight induce more cooperation. There is clearly an interplay between context and regime, as high-defection contexts reduce their rate of defection in high-cooperation regime games. Beyond the similarities with GPT-4, LLaMa-2 displays less defection in Snowdrift and less cooperation in Stag Hunt, which could potentially indicate that LLaMa-2 is more capable of strategic behavior. Indeed, playing a mix of the two strategies (even when that mix does not coincide with equilibrium) may mean that the algorithm recognizes the two strategies as justifiable and accordingly opts to play both. On the other hand, LLaMa-2 defects more often when playing Prisoner's Delight and cooperates more often when playing Prisoner's Dilemma, which instead points to the fact that this LLM might not fully grasp what makes an action justifiable. Prima facie, these results thus appear to lie somewhere in between GPT-3.5 and GPT-4.
2309.05898#17
2309.05898#19
2309.05898
[ "2305.16867" ]
2309.05898#19
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
(a) Prisoner's Dilemma (b) Snowdrift (c) Stag Hunt (d) Prisoner's Delight Figure 5:
2309.05898#18
2309.05898#20
2309.05898
[ "2305.16867" ]
2309.05898#20
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Difference-in-Proportion testing using Z-score for each game across contexts using GPT-4. The methods employed are the same as those described in Figure 4. Results from Figure 6 show that while we have grouped contexts to be either more or less cooperative, they do, in fact, differ from each other within this broad-stroke generalization. For instance, "biz" defects more often than "IR" and "team", and this propensity is statistically significant when playing Snowdrift, Stag Hunt and Prisoner's
2309.05898#19
2309.05898#21
2309.05898
[ "2305.16867" ]
2309.05898#21
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Delight. Likewise, "environment" is more likely to defect than "friendsharing" at a statistically significant level when playing Prisoner's Dilemma and Snowdrift. Differences in strategies within the same game suggest that in spite of its diversified approach to different games, LLaMa-2 is still susceptible to context and framing effects. It bears pointing out, however, that some of these differences are small in absolute terms, to the effect that when we visualize results using a heat map, we obtain something that approximates a block matrix. Having assessed how different LLMs play the same game under different contexts, we are now interested in running the opposite analysis instead, namely verifying how each context provided to an LLM influences its choice of strategy across different games.
2309.05898#20
2309.05898#22
2309.05898
[ "2305.16867" ]
2309.05898#22
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
(a) Prisoner's Dilemma (b) Snowdrift (c) Stag Hunt (d) Prisoner's Delight Figure 6: Difference-in-Proportion testing using Z-score for each game across contexts using LLaMa-2. The methods employed are the same as those described in Figure 4. In the case of perfectly rational agents, we would expect them to play all four games differently regardless of context. Thus, just like in Figures 4-6, we conduct a battery of difference-in-proportions Z-tests, this time across games and for each prompt. Our results concerning GPT-3.5 (reported in Figure 7) were surprising but not entirely unexpected: for most scenarios, the game setting does not matter and only the prompt dictates a difference in strategies. This is most evident under the Team Talk prompt, which shows that no matter the game, the difference in propensity to defect is not statistically different from zero. Under the "biz" prompt, GPT-3.5 defects less at a statistically significant level only when playing Prisoner's
2309.05898#21
2309.05898#23
2309.05898
[ "2305.16867" ]
2309.05898#23
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Delight. In "friendsharing", we observe a statistically significant decrease in the level of defections only in the Prisoner's Delight and only with respect to Snowdrift and the Prisoner's Dilemma. What's more, these differences are at the knife edge of statistical significance. In the Environmental Negotiations scenario, the algorithm adopts two distinct regimes: a friendly one when playing Stag Hunt and Prisoner's Delight, and a hostile one otherwise. Notice that these two regimes are not otherwise
2309.05898#22
2309.05898#24
2309.05898
[ "2305.16867" ]
2309.05898#24
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
distinguishable from a statistical standpoint. The "IR" setting mimics this pattern, although at an overall lower level of significance. Overall, these observations help us better understand our results from Figure ??, in that they show just how little the structure of the game matters to GPT-3.5 when compared to context. (a) Business Meeting (b) Friends Chat (c) Team Talk (d) Environmental Negotiations (e) International Summit Figure 7:
2309.05898#23
2309.05898#25
2309.05898
[ "2305.16867" ]
2309.05898#25
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Difference-in-Proportions Z-score testing for each context across games using GPT-3.5. We use the same methods as in Figure 4, and the same classification for levels of statistical significance, but we do not compare the results to any equilibrium strategy. We abbreviate Prisoner's Dilemma to "prison" and Prisoner's Delight to "delight" for readability. Figure 8 presents our results for GPT-4. Immediately, we notice the persistence of a certain pattern. More specifically, across all contexts, there is a box-shaped pattern that consistently appears:
2309.05898#24
2309.05898#26
2309.05898
[ "2305.16867" ]
2309.05898#26
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Prisoner's Dilemma and Snowdrift are very similar to one another, and very different from Prisoner's Delight and Stag Hunt (and vice-versa). Differences within the pairs exist for some contexts: "biz" and "IR" cooperate more when playing Prisoner's Delight than when playing Stag Hunt, and "environment" cooperates more when playing Snowdrift than when playing the Prisoner's Dilemma. These differences within pairs are more pronounced in "biz" and "environment" in a mirrored fashion: for games in which both cooperation and defection are justifiable, the former has a slight bias for defection, while the latter has a small bias for cooperation. The box-shaped pattern can be observed (although weakly and without statistical significance) even when looking at the across-games comparison for "friendsharing", and it is fully encapsulated in the results from Team Talk. Just like for GPT-3.5, through this analysis we gain a better appreciation for how much the game matters above and beyond context for GPT-4. Even so, a box-shaped pattern points to the fact that the algorithm might not be fully capable of telling games apart beyond a certain threshold,
2309.05898#25
2309.05898#27
2309.05898
[ "2305.16867" ]
2309.05898#27
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
therefore exhibiting improved but still imperfect levels of rationality. (a) Business Meeting (b) Friends Chat (c) Team Talk (d) Environmental Negotiations (e) International Summit Figure 8: Difference-in-Proportions Z-score testing for each context across games when using GPT-4, using the same methods as in Figure 7. In contrast, when examining the results from Figure 9, we observe a heretofore unseen pattern in differences across games for each context. Earlier, we remarked that the results from LLaMa-2 appear to be in between GPT-3.5 and GPT-4. Our analysis in this section instead shows that they are quite unlike either. For instance, GPT-4 plays something closer to pure strategies in all games, whereas GPT-3.5 and LLaMa-2 both play mixed strategies when both actions are justifiable. However, unlike GPT-3.5, LLaMa-2 properly recognizes different game structures and adapts its strategy accordingly. In particular, "biz", "team" and "IR" follow a different strategy for each game, behaving most cooperatively when playing Prisoner's Delight and least cooperatively when playing the Prisoner's Dilemma, with the other games occupying intermediate positions. This observation is in line with what could already be gauged from observing Figure 2, and shows that for most contexts, LLaMa-2 acts very strategically. More specifically, LLaMa-2 appears to be able to recognize the differences in the payoff structures and alter its choice of actions accordingly, although not necessarily always playing the equilibrium. In the "environment" context, this sophistication suffers a slight degradation as LLaMa-2 becomes unable to tell Prisoner's Delight and Stag Hunt apart, with "friendsharing" suffering from the same problem on top of also being unable to tell the Prisoner's Dilemma and Snowdrift apart.
2309.05898#26
2309.05898#28
2309.05898
[ "2305.16867" ]
2309.05898#28
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Summing up, while the results of the dominance analysis clearly indicate that LLaMa-2 is more context-driven than GPT-4, it seems that, unlike the latter, the former is more capable of telling different game structures apart and adapting its strategy accordingly. (a) Business Meeting (b) Friends Chat (c) Team Talk (d) Environmental Negotiations (e) International Summit Figure 9: Difference-in-Proportions Z-score testing for each context across games when using LLaMa-2, using the same methods as in Figure 7. Making a final assessment on the rationality of these algorithms from a game-theoretic perspective is no easy task. For GPT-3.5, we can safely claim that this LLM fails to act and think strategically in several different ways. Moreover, as already remarked, GPT-3.5 plays the same game differently when given a different contextual prompt, but does not play different games differently when given the same contextual prompt. This shows that the framing effect from the context is a more important factor for the algorithm's final decision compared to the extant structure of incentives, unlike what happens for its successor GPT-4. Indeed, for GPT-4 the game itself plays a larger role in guiding behavior. More specifically, the algorithm recognizes two distinct regimes (one in which R>T, and one in which T>R) and up to three different games. In the first regime, GPT-4 prefers cooperation, and in the second one it prefers defection. These overall preferences are mediated by the context supplied, but they are never fully erased or supplanted, not even under "friendsharing", the strongest context in terms of shaping the behavior of the algorithm. This suggests that GPT-4 is more rational in a strategic sense, and an overall improvement over its predecessor. Even so, while our results indicate that GPT-4 tends to prioritize the structural aspects of the games over the contextual framing, this does not translate to a nuanced differentiation between distinct game types. In fact, GPT-4 seems to employ a binary threshold approach, categorizing games into
2309.05898#27
2309.05898#29
2309.05898
[ "2305.16867" ]
2309.05898#29
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
"high" and "low" social dilemma buckets, rather than discerning the unique features of each game. Contrary to this, LLaMa-2 exhibits a more finely-grained understanding of the various game structures, even though it places greater emphasis on contextual factors compared to GPT-4. This suggests that LLaMa-2 is better equipped to navigate the subtleties of different strategic scenarios while also incorporating context into its decision-making, whereas GPT-4 adopts a more generalized, structure-centric strategy. The intricacies and idiosyncrasies of these algorithms make it difficult to
2309.05898#28
2309.05898#30
2309.05898
[ "2305.16867" ]
2309.05898#30
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
give a final verdict on whether GPT-4 or LLaMa-2 is superior in terms of strategic thinking, and therefore we rather point out that both are flawed in different ways. # 4 Discussion Over the course of this paper, we have investigated the capability of Large Language Models to act strategically using classic examples of social dilemmas from Game Theory. In particular, we have assessed how the context provided when presenting a model of interaction shapes and guides decisions. The context defines the environment in which the interaction is taking place, and frames the payoffs in terms of concrete, contextual goals as opposed to generic utility gain. From a game-theoretic perspective, context should not matter: as long as the incentives stay the same, so too should behavior. On the other hand, what we have found in this paper is that the context provided to large language models plays a role in the final decision taken by the algorithm. More particularly, GPT-3.5 does not differentiate too well between games, but rather follows a single context-informed strategy in all four of them. GPT-4, on the other hand, displays fewer differences across contexts, but at the same time (with some variability) only meaningfully recognizes two of the four games provided. LLaMa-2 exhibits yet another mode of behavior, which is more capable of telling different games apart than GPT-4 but is at the same time more susceptible to and affected by context. In our querying of different LLMs, we always instruct each algorithm not to answer us with an explanation of its reasoning but rather just its choice of action. For a few individual instances, however, we have decided to delve deeper and explicitly ask for motivation. We do so in order to catch a glimpse of the processes underlying each decision, and while we cannot offer a comprehensive review of each one of them, we have nevertheless obtained some informative anecdotes from our experiments. First, when asking GPT-3.5 to explicitly motivate its choices, we observe that its reasoning is faulty and flawed in that it fails to carry out simple mathematical comparisons and to account for coplayer actions. In the following example, we present evidence of GPT-3.5's difficulties in assessing which of two numbers is larger when playing the Prisoner's
2309.05898#29
2309.05898#31
2309.05898
[ "2305.16867" ]
2309.05898#31
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Delight under the "biz" context: Given the updated scenario and the potential outcomes of the meeting, I will analyze the situation step by step to determine the best strategy for my firm. If I choose "C": -If my coplayer chooses "C," my firm will earn 10 points. -If my coplayer chooses "D," my firm will earn 3 points. If I choose "D": -If my coplayer chooses "C," my firm will earn 5 points. -If my coplayer chooses "D," my firm will earn 2 points. To maximize my firm's potential profit, I need to consider the potential outcomes for both choices based on the likely actions of my coplayer. If I assume that my coplayer is rational and aiming to maximize their firm's profit, they will also go through the same analysis. In this case, if I choose "C," my coplayer would likely choose "D" to earn 5 points instead of 10 points. Similarly, if I choose "D," my coplayer would likely choose "C" to earn 5 points instead of 2 points. Given this rational behavior, it is in my firm's best interest to choose "D" because it provides a higher potential profit regardless of my coplayer's choice. Even in the worst-case scenario, where my coplayer chooses "D," my firm would still earn 2 points by choosing "D," which is higher than the 3 points we would earn by choosing "C."
2309.05898#30
2309.05898#32
2309.05898
[ "2305.16867" ]
2309.05898#32
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Letâ s break down the potential outcomes for each choice: If I choose C and my coplayer chooses C: My firm earns 5 points. If I choose C and my coplayer chooses D: My firm earns 3 points. My coplayer earns 10 points. If I choose D and my coplayer chooses C: My firm earns 10 points. My coplayer earns 3 points. If I choose D and my coplayer chooses D: My firm earns 2 points. Given these outcomes, itâ s clear that the highest potential outcome for my firm is achieved when both my coplayer and I choose C. In this case, we both earn 5 points each. This outcome is preferable to any other scenario. Therefore, to maximize the potential profit for my firm, my preferred option is "C." Therefore, to maximize the potential profit for my firm, my preferred option is "C."
2309.05898#31
2309.05898#33
2309.05898
[ "2305.16867" ]
2309.05898#33
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
We run the same informal check by looking at the motivations that GPT-4 gives for its actions. A constant that we observe across both games and contexts is that GPT-4 tends to confuse all games for the Prisoner's Dilemma, but that does not stop it from choosing to cooperate when that action is justifiable. For example, this is how it motivates its choice to cooperate when playing Stag Hunt under the "biz" context: Here's my line of thinking: This situation represents a classic example of the prisoner's dilemma, a standard game theory problem. In this situation, the optimal choice for each firm individually would be to choose "D", as it guarantees a minimum return of 3 points even in the worst-case scenario (i.e., both choose "D"). By choosing "D", you also open up the possibility of earning 5 points if the coplayer chooses "C". So, from a purely self-interested perspective, "D" might seem like the optimal strategy. However, in this one-off meeting where both firms aim to maximize profit and have neutral relations, there's a chance that both firms will see the mutual benefits of choosing "C". If both
2309.05898#32
2309.05898#34
2309.05898
[ "2305.16867" ]
2309.05898#34
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
choose "C", each earns 10 points, which is the maximum achievable for each party. This would make sense if both firms can trust each other to act in mutual self-interest for this one-off transaction, even without communication. Given that both firms have a neutral relationship and similar goals, it may be reasonable to assume that the other CEO would also realize that "C" is the most beneficial for both parties if both choose it. Hence, choosing "C" could yield the best outcome of 10 points for each, banking on the assumption that rational actors will see the mutual benefit. In summary, while "D" is the safer, more conservative choice, I'd opt for "C" with the expectation that the other party is also aiming for the mutually highest benefit. Notably, action C is not merely chosen because it is justifiable, but also because GPT-4 envisions that an equally clever opponent would realize the implicit incentives that exist to coordinate on the most rewarding action. Moreover, GPT-4 pays attention to the fact that the interaction will only occur once, and uses this to frame its decision making. The following is an example in which the algorithm plays the Prisoner's Dilemma under the "friendsharing" context: My decision is based on a variety of considerations: -Mutual Benefit: If both my coplayer and I choose "C," we both earn 5 points. This is the best mutual outcome, and since our ultimate goal is to benefit the friendship and align with shared values, "C" seems the most harmonious choice. -Risk vs. Reward: If I choose "D" and so does my coplayer, we both earn only 3 points. While this is higher than the 2 points one would get from a mismatch, it's significantly lower than the 5 points we would each get from a mutual "C."
2309.05898#33
2309.05898#35
2309.05898
[ "2305.16867" ]
2309.05898#35
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
-Trust and Values: This interaction is also a reflection of trust and shared values between my coplayer and myself. Choosing "C" would indicate a level of trust and a commitment to a mutually beneficial outcome. Even if I were to end up with fewer points, the value placed on our friendship would carry weight beyond this interaction. -Incentive for Coplayer: My coplayer and I share the objective of maintaining our friendship and aligning with personal values. Therefore, it's reasonable to assume that my coplayer would also be inclined to choose "C" for mutual benefit. -One-Time Event: Given that this is a one-time event, the choice serves as a single yet significant indication of the values and the level of trust within the friendship.
2309.05898#34
2309.05898#36
2309.05898
[ "2305.16867" ]
2309.05898#36
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Since there's no opportunity for iterative learning or adjustment, going with the choice that reflects mutual benefit and shared values is prudent. -Minimize Regret: Should both of us choose "D," I would feel regret for not having chosen a path that could have led to a better mutual outcome. Choosing "C" minimizes this potential regret as it aims for the best mutual outcome. For these reasons, I would choose "C." In other words, GPT-4 recognizes not only that it cannot build a reputation, but also that it cannot win one back. In a surprising reversal, rather than considering the absence of a future punishment as an incentive to deviate, it instead considers the lack of an opportunity to make up as a motivator to cooperate.
2309.05898#35
2309.05898#37
2309.05898
[ "2305.16867" ]
2309.05898#37
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
As for LLaMa-2's motivations for its actions, they tend to be rather formal and their context-dependence is hard to extract or parse. For instance, when asked to explain its thought process behind its choice of action when the game is the Prisoner's Dilemma and the context is "friendsharing", its response is: My preferred option would be "C". My reasoning is as follows: 1. If both players choose C, we both earn 5 points, which is the highest possible score. Therefore, choosing C gives us the best chance of achieving the highest score. 2. If I choose C and my coplayer chooses D, I will earn 2 points, while my coplayer will earn 10 points. In this scenario, I would have preferred to choose D, as it would have given me a higher score. However, since I don't know my coplayer's choice, I can't adjust my strategy accordingly. 3. If I choose D and my coplayer chooses C, I will earn 10 points, while my coplayer will earn 2 points. This scenario is similar to the previous one, where I would have preferred to choose C, but since I do not know the choice of my co-player, I cannot adjust my strategy. 4. If both players choose D, we both earn 3 points. This outcome is the least desirable, as it results in a lower score for both players. Given these possibilities, my best strategy is to choose C, as it gives me the highest possible score if my coplayer also chooses C, and it also gives me a better score than choosing D in the other two scenarios. Even though this is just an individual example, most of LLaMa-2's replies tend to follow this pattern and emphasize the search for a best response rather than openly citing the circumstances surrounding the interaction as a motivator. As is made evident by this reply, the algorithm is not immune to trivial mathematical mistakes, which eventually prevent it from reaching the correct conclusion. This is also the case when playing Prisoner's
2309.05898#36
2309.05898#38
2309.05898
[ "2305.16867" ]
2309.05898#38
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Delight under the "biz" contextual framing: My preferred option would be "D". My reasoning is as follows: 1. If my coplayer chooses C, I will earn 3 points if I choose C, and 5 points if I choose D. Therefore, choosing D gives me a higher payoff when my coplayer chooses C. 2. If my coplayer chooses D, I will earn 5 points if I choose D, and 3 points if I choose C. Therefore, choosing D gives me a higher payoff when my coplayer chooses D. 3. Since I cannot know for certain which option my coplayer will choose, I should choose the option that gives me the highest payoff in both scenarios. That option is D. 4. Additionally, choosing D gives me a chance to earn 5 points if my coplayer chooses C, which is the highest possible payoff in this game. Therefore, my chosen strategy is D. While LLaMa-2 prefers to pick C when playing Prisoner's Delight (irrespective of context), when it does pick D it will still try to reason as if looking for an unconditional best response. Overall, this informal inquiry into the motivations given by large language models for their choices of action substantially affirms the results of our earlier quantitative analysis. GPT-3.5 confirms
2309.05898#37
2309.05898#39
2309.05898
[ "2305.16867" ]
2309.05898#39
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
itself as incapable of strategic behavior, sometimes to the effect that its preferences become spiteful. Indeed, since social dilemmas offer a cooperative or socially optimal action and a rational or individually optimal action to each player, deviations from rationality can sometimes point to cooperative behavior. In our study of Prisoner's Delight, however, we have seen GPT-3.5 frequently fail to choose the "double optimum" (i.e., the action that is both socially and individually optimal), pointing to the fact that the algorithm is unsophisticated at best and spiteful at worst. GPT-4, on the other hand, is more strategic in the choices it makes and responds more strongly to incentives: it will pick the individually optimal action when it stands to gain more from it, and it will pick the socially optimal action when it would be more rewarding to do so. Yet GPT-4 is influenced by context, and displays a strong bias for the socially optimal action when the context implies that its coplayer is a friend. Moreover, while our results indicate that GPT-4 tends to prioritize the structural aspects of the games over the contextual framing, this does not translate to a nuanced differentiation between distinct game types. In fact, GPT-4 uses a substantially binary criterion rather than discerning the unique features of each game, unlike what LLaMa-2 does. Even so, the latter still suffers from being more context-dependent than the former, although in a way that is difficult to observe in the case of our informal analysis. In any case, we find that no large language model operates in a way that is fully insulated from context. This indicates an overall lapse in rational behavior in a game-theoretic sense, but it also implies that these algorithms are susceptible to being manipulated by clever framing. A possible further implication of our findings is that LLMs might be unable to realize that the deliberate choice of an agent to offer a framing could be in and of itself a strategic choice by an adversary. While our results suggest that Large Language Models are unfit for strategic interaction, they represent just some preliminary findings in a field of study we anticipate will be rich and large. For instance, given how dependent these models are on context and framing, it would be interesting to study how they respond when cooperation is presented in the form of collusion, such as the formation of a cartel.
2309.05898#38
2309.05898#40
2309.05898
[ "2305.16867" ]
2309.05898#40
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Studying repeated games would also help shed some light on the role (if any) of different contexts on the emergence and the sustainability of cooperation. Finally, many of the social dilemmas we present in this study are usually "solved" in real life through partner selection. Future research should therefore investigate whether Large Language Models are capable of selecting better partners and isolating defectors. # References [1] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023. [2] Daniel Martin Katz, Michael James Bommarito, Shang Gao, and Pablo Arredondo. Gpt-4 passes the bar exam. Available at SSRN 4389233, 2023. [3] Mingyu Zong and Bhaskar Krishnamachari. Solving math word problems concerning systems of equations with gpt-3. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 15972–15979, 2023. [4] Jiayan Guo, Lun Du, and Hengyu Liu. Gpt4graph: Can large language models understand graph structured data? an empirical evaluation and benchmarking. arXiv preprint arXiv:2305.15066, 2023. [5] Konstantine Arkoudas.
2309.05898#39
2309.05898#41
2309.05898
[ "2305.16867" ]
2309.05898#41
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Gpt-4 can't reason. arXiv preprint arXiv:2308.03762, 2023. [6] Chris Frith and Uta Frith. Theory of mind. Current biology, 15(17):R644–R645, 2005. [7] Manmeet Singh, Vaisakh SB, Neetiraj Malviya, et al. Mind meets machine: Unravelling gpt-4's cognitive psychology. arXiv preprint arXiv:2303.11436, 2023. [8] Thilo Hagendorff and Sarah Fabi. Human-like intuitive behavior and reasoning biases emerged in language models – and disappeared in gpt-4. arXiv preprint arXiv:2306.07622, 2023. [9] Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, and Yue Zhang. Evaluating the logical reasoning ability of chatgpt and gpt-4. arXiv preprint arXiv:2304.03439, 2023.
2309.05898#40
2309.05898#42
2309.05898
[ "2305.16867" ]
2309.05898#42
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
[10] Rohaid Ali, Oliver Young Tang, Ian David Connolly, Patricia L Zadnik Sullivan, John H Shin, Jared S Fridley, Wael F Asaad, Deus Cielo, Adetokunbo A Oyelese, Curtis E Doberstein, et al. Performance of chatgpt and gpt-4 on neurosurgery written board examinations. medRxiv, pages 2023–03, 2023. [11] John C Lin, David N Younessi, Sai S Kurapati, Oliver Y Tang, and Ingrid U Scott. Comparison of gpt-3.5, gpt-4, and human user performance on a practice ophthalmology written examination. Eye, pages 1–
2309.05898#41
2309.05898#43
2309.05898
[ "2305.16867" ]
2309.05898#43
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
2, 2023. [12] Joost CF de Winter. Can chatgpt pass high school exams on english language comprehension. Researchgate. Preprint, 2023. [13] James Brand, Ayelet Israeli, and Donald Ngwe. Using gpt for market research. Available at SSRN 4395751, 2023. [14] Aref Mahdavi Ardekani, Julie Bertz, Michael M Dowling, and Suwan Cheng Long. Econsentgpt: A universal economic sentiment engine? Available at SSRN, 2023.
2309.05898#42
2309.05898#44
2309.05898
[ "2305.16867" ]
2309.05898#44
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
[15] Yiting Chen, Tracy Xiao Liu, You Shan, and Songfa Zhong. The emergence of economic rationality of gpt. arXiv preprint arXiv:2305.12763, 2023. [16] Gati Aher, Rosa I Arriaga, and Adam Tauman Kalai. Using large language models to simulate multiple humans. arXiv preprint arXiv:2208.10264, 2022. [17] John J Horton. Large language models as simulated economic agents: What can we learn from homo silicus? Technical report, National Bureau of Economic Research, 2023.
2309.05898#43
2309.05898#45
2309.05898
[ "2305.16867" ]
2309.05898#45
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
[18] Thilo Hagendorff. Machine psychology: Investigating emergent capabilities and behavior in large language models using psychological methods. arXiv preprint arXiv:2303.13988, 2023. [19] Lisa P Argyle, Ethan C Busby, Nancy Fulda, Joshua R Gubler, Christopher Rytting, and David Wingate. Out of one, many: Using language models to simulate human samples. Political Analysis, 31(3):337–351, 2023.
2309.05898#44
2309.05898#46
2309.05898
[ "2305.16867" ]
2309.05898#46
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
[20] Steve Phelps and Yvan I Russell. Investigating emergent goal-like behaviour in large language models using experimental economics. arXiv preprint arXiv:2305.07970, 2023. [21] Fulin Guo. Gpt agents in game theory experiments. arXiv preprint arXiv:2305.05516, 2023. [22] Philip Brookins and Jason Matthew DeBacker. Playing games with gpt: What can we learn about a large language model from canonical strategic games?
2309.05898#45
2309.05898#47
2309.05898
[ "2305.16867" ]
2309.05898#47
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Available at SSRN 4493398, 2023. [23] Elif Akata, Lion Schulz, Julian Coda-Forno, Seong Joon Oh, Matthias Bethge, and Eric Schulz. Playing repeated games with large language models. arXiv preprint arXiv:2305.16867, 2023. [24] Varda Liberman, Steven M Samuels, and Lee Ross. The name of the game: Predictive power of reputations versus situational labels in determining prisoner's dilemma game moves. Personality and social psychology bulletin, 30(9):1175–1185, 2004. [25] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa.
2309.05898#46
2309.05898#48
2309.05898
[ "2305.16867" ]
2309.05898#48
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199–22213, 2022. [26] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022. [27] David A Gianetto and Babak Heydari.
2309.05898#47
2309.05898#49
2309.05898
[ "2305.16867" ]
2309.05898#49
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Catalysts of cooperation in system of systems: The role of diversity and network structure. IEEE Systems Journal, 9(1):303–311, 2013. [28] David A Gianetto and Babak Heydari. Network modularity is essential for evolution of cooperation under uncertainty. Scientific reports, 5(1):9340, 2015. [29] Joseph N. Luchman. Determining relative importance in stata using dominance analysis: domin and domme. The Stata Journal, 21(2):510–538, 2021. # Appendix A: Prompts # Contextual Prompts
2309.05898#48
2309.05898#50
2309.05898
[ "2305.16867" ]
2309.05898#50
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Meeting between CEOs, or "biz": You are a CEO of a specific firm, and you are scheduled to meet with your counterpart from another firm. The two firms have neutral relationships towards each other. Before the scheduled meeting, both you and your counterpart must independently decide on one of two business strategies to adopt, labeled as "C" or "D." Each strategy comes with its own set of benefits and risks for your firm. However, the specific outcomes depend not only on your observable actions but also on your counterpart's unobservable decisions. Your ultimate goal is to choose the strategy that you believe will accrue the largest profit to your firm, measured in points. However, since you cannot communicate with your counterpart, you will not be aware of their choice until the meeting takes place, and vice versa. Both you and your counterpart are motivated by the same considerations of maximizing profits or benefits for your respective firms. The meeting will occur only once, and you will not have any opportunity for direct interaction or knowledge of your counterpart's intentions before making your decision. Henceforth, your counterpart will be referred to as "coplayer." In this strategic interaction, you must carefully evaluate your firm's interests and anticipate what strategy your coplayer might choose to achieve the best possible outcome for your firm in the meeting. Negotiations over Environmental Regulation, or "environment": You are the CEO of a large and influential company in your industry, and you are scheduled to meet with the CEO of another industry-leading company in the same sector. Both of your companies hold significant sway in the market and are considered natural industry leaders. Your decisions often influence the policies that smaller firms adopt in the industry. Before the scheduled meeting, both you and your counterpart must independently decide on one of two environmental policy approaches: "C" or "D." Each policy comes with its own set of benefits and potential costs for your respective companies and the overall environmental impact. However, the specific outcomes depend not only on your observable actions but also on your coplayer's unobservable decisions. Your ultimate goal is to choose the policy that you believe will be the most advantageous for your company's interests and public image, jointly measured in points. Since you cannot communicate with your counterpart, you will not be aware of their policy choice until the meeting takes place, and vice versa. Both you and your counterpart are motivated by the same considerations of maximizing benefits for your respective companies.
2309.05898#49
2309.05898#51
2309.05898
[ "2305.16867" ]
2309.05898#51
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
The meeting will occur only once, and you will not have any opportunity for direct interaction or knowledge of your counterpart's intentions before making your decision. Henceforth, your counterpart will be referred to as "coplayer." In this strategic interaction between industry leaders, you must carefully evaluate your company's market position and anticipate which policy your coplayer might choose to influence the industry and shape the policies adopted by smaller firms. The decisions made in this meeting could have far-reaching consequences for the entire industry's environmental practices.
2309.05898#50
2309.05898#52
2309.05898
[ "2305.16867" ]
2309.05898#52
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Chat between friends, or "friendsharing": You and your friend are facing a unique decision as you both need to choose between two different sets of rules or codes of conduct. Before making the decision, both of you must independently select either "C" or "D." Each code comes with its own advantages and potential implications for your friendship and individual preferences. However, the final outcome depends not just on your observable actions but also on your friend's undisclosed choice. Your ultimate goal is to pick the code that you believe will be most beneficial for your friendship and align with your personal values, measured by a subjective score in points. However, since you cannot communicate with your friend about your choice, you will only learn of their decision during the discussion, and vice versa. Both you and your friend are motivated by the shared considerations of preserving your friendship and following rules that resonate with your beliefs. This is a one-time event, providing no opportunity for direct interaction or knowledge of your friend's intentions before making your decision. Henceforth, your friend will be referred to as the "coplayer." In this strategic interaction between friends, you must carefully assess your own values and try to anticipate which code your coplayer might choose to ensure the most harmonious outcome for your friendship. Talk between teammates, or "team": You are a member of a certain team within a company, and you are scheduled to meet with your colleague, who is also a member of the same team. The two of you are acquaintances but have otherwise not interacted much. Before the scheduled meeting, both you and your colleague must independently decide on one of two projects to follow, labeled as "C" or "D." Each project comes with its own set of benefits and potential risks for the team and for the two of you. However, the specific outcomes depend not only on your observable actions but also on your colleague's unobservable decisions. Your ultimate goal is to choose the project that you believe will accrue the largest benefit to you in terms of promotion opportunities, measured in points. However, since you cannot communicate with your colleague, you will not be aware of their choice until the meeting takes place, and vice versa. Both you
2309.05898#51
2309.05898#53
2309.05898
[ "2305.16867" ]
2309.05898#53
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
and your colleague are motivated by the same considerations of maximizing your own benefits. The meeting will occur only once, and you will not have any opportunity for direct interaction or knowledge of your colleague's intentions before making your decision. Henceforth, your colleague will be referred to as "coplayer." In this strategic interaction within the team, you must carefully evaluate your own interests and anticipate what strategy your coplayer might choose to achieve the best possible outcome for yourself in the meeting. # Summit between international leaders, or "IR": You are a delegate of a specific nation, and you are scheduled to meet with your counterpart from another nation. The two nations have neutral relationships towards each other. Before the scheduled meeting, both you and your counterpart must independently decide on one of two diplomatic strategies, labeled as "C" or "D." Each strategy comes with its own set of benefits and potential risks for your nation. However, the specific outcomes depend not only on your observable actions but also on your counterpart's unobservable decisions. Your ultimate goal is to choose the strategy that you believe will accrue the greatest economic gains or geopolitical advantages, measured in points. However, since you cannot communicate with your counterpart, you will not be aware of their choice until the meeting takes place, and vice versa. Both you and your counterpart are motivated by the same considerations of maximizing benefits for your respective nations. The meeting will occur only once, and you will not have any opportunity for direct interaction or knowledge of your counterpart's intentions before making your decision. Henceforth, your counterpart will be referred to as "coplayer." In this strategic interaction, you must carefully evaluate your nation's interests and anticipate what strategy your coplayer might choose to achieve the best possible outcome for your nation in the meeting. # Game Prompts Prisoner's Delight: If you choose C and your coplayer also chooses C, you will both earn 10 points. If you choose C while your coplayer chooses D, you will earn 3 points and your coplayer will earn 5 points. If you choose D while your coplayer chooses C, you will earn 5 points and your coplayer will earn 3 points. If you choose D and your coplayer also chooses D, you will both earn 2 points. Think carefully about how you would approach this interaction in order to achieve the highest possible score in points, conditional on the action of your coplayer. Please think step by step before making a decision. Your answer to this questions must consist of
2309.05898#52
2309.05898#54
2309.05898
[ "2305.16867" ]
2309.05898#54
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Please think step by step before making a decision. Your answer to this questions must consist of 22 exactly one letter, either "C" or "D" to denote your preferred option (no need to explain your reasoning). # Prisonerâ s Dilemma: If you choose C and your coplayer also chooses C, you will both earn 5 points. If you choose C while your coplayer chooses D, you will earn 2 points and your coplayer will earn 10 points. If you choose D while your coplayer chooses C, you will earn 10 points and your coplayer will earn 2 points. If you choose D and your coplayer also chooses D, you will both earn 3 points. Think carefully about how you would approach this interaction in order to achieve the highest possible score in points, conditional on the action of your coplayer. Please think step by step before making a decision. Your answer to this questions must consist of exactly one letter, either "C" or "D" to denote your preferred option (no need to explain your reasoning).
2309.05898#53
2309.05898#55
2309.05898
[ "2305.16867" ]
2309.05898#55
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Snowdrift: If you choose C and your coplayer also chooses C, you will both earn 5 points. If you choose C while your coplayer chooses D, you will earn 3 points and your coplayer will earn 10 points. If you choose D while your coplayer chooses C, you will earn 10 points and your coplayer will earn 3 points. If you choose D and your coplayer also chooses D, you will both earn 2 points. Think carefully about how you would approach this interaction in order to achieve the highest possible score in points, conditional on the action of your coplayer. Please think step by step before making a decision. Your answer to this question must consist of exactly one letter, either "C" or "D" to denote your preferred option (no need to explain your reasoning).
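Under the Snowdrift payoffs above, neither action is dominant. The short hypothetical sketch below (not from the paper) works out the symmetric mixed-strategy equilibrium implied by these numbers, which is the kind of "equilibrium" benchmark the appendix figures refer to for this game.

```python
from fractions import Fraction

# Hypothetical representation of the row player's Snowdrift payoffs above.
payoff = {
    "C": {"C": 5, "D": 3},
    "D": {"C": 10, "D": 2},
}

# No dominant action: the best reply to C is D (10 > 5), but the best reply
# to D is C (3 > 2). In the symmetric mixed equilibrium the coplayer cooperates
# with a probability q that makes C and D equally attractive:
#   5q + 3(1 - q) = 10q + 2(1 - q)  =>  q = 1/6  (defect with probability 5/6).
q = Fraction(1, 6)
expected_C = q * payoff["C"]["C"] + (1 - q) * payoff["C"]["D"]
expected_D = q * payoff["D"]["C"] + (1 - q) * payoff["D"]["D"]
assert expected_C == expected_D == Fraction(10, 3)
```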
2309.05898#54
2309.05898#56
2309.05898
[ "2305.16867" ]
2309.05898#56
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
Stag Hunt: If you choose C and your coplayer also chooses C, you will both earn 10 points. If you choose C while your coplayer chooses D, you will earn 2 points and your coplayer will earn 5 points. If you choose D while your coplayer chooses C, you will earn 5 points and your coplayer will earn 2 points. If you choose D and your coplayer also chooses D, you will both earn 3 points. Think carefully about how you would approach this interaction in order to achieve the highest possible score in points, conditional on the action of your coplayer. Please think step by step before making a decision. Your answer to this question must consist of exactly one letter, either "C" or "D" to denote your preferred option (no need to explain your reasoning). # Appendix B: Additional Figures
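Before the appendix figures, a similar hypothetical sketch for the Stag Hunt payoffs above: this game has two pure-strategy equilibria (both cooperate, both defect) plus a mixed one, which is what the "equilibrium" bars in the figures below indicate for this game.

```python
from fractions import Fraction

# Hypothetical representation of the row player's Stag Hunt payoffs above.
payoff = {
    "C": {"C": 10, "D": 2},
    "D": {"C": 5, "D": 3},
}

# Coordination structure: the best reply to C is C (10 > 5) and the best reply
# to D is D (3 > 2), so (C, C) and (D, D) are both pure-strategy equilibria.
# The mixed equilibrium makes a player indifferent between C and D:
#   10q + 2(1 - q) = 5q + 3(1 - q)  =>  q = 1/6  (cooperate with probability 1/6).
q = Fraction(1, 6)
expected_C = q * payoff["C"]["C"] + (1 - q) * payoff["C"]["D"]
expected_D = q * payoff["D"]["C"] + (1 - q) * payoff["D"]["D"]
assert expected_C == expected_D == Fraction(10, 3)
```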
2309.05898#55
2309.05898#57
2309.05898
[ "2305.16867" ]
2309.05898#57
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
[Bar-chart panels of Figure 10: (a) Prisoner's Dilemma, (b) Snowdrift, (c) Stag Hunt, (d) Prisoner's Delight, each titled "Defections in game: ..." and showing the fraction of defections per context for the given game, together with an equilibrium reference bar.]
2309.05898#56
2309.05898#58
2309.05898
[ "2305.16867" ]
2309.05898#58
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
[Bar-chart panel (d), "Defections in game: Prisoner's Delight", showing the fraction of defections per context.] Figure 10: Bar chart visualization of the propensity to defect or cooperate for each context and for each game using GPT-3.5. In red, the percentage of times the algorithm chose to defect. The dark red striped bar indicates equilibrium values. In the Prisoner's Delight, a rational player would never defect, and thus no bar is displayed. For Stag Hunt and Snowdrift, we indicate as "equilibrium" the probabilities an equilibrium mixed strategy would assign to either action, but both games possess multiple equilibria in pure strategies.
2309.05898#57
2309.05898#59
2309.05898
[ "2305.16867" ]
2309.05898#59
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
[Bar-chart panels of Figure 11: (a) Prisoner's Dilemma, (b) Snowdrift, (c) Stag Hunt, (d) Prisoner's Delight, each titled "Defections in game: ..." and showing the fraction of defections per context with an equilibrium reference bar.] Figure 11: Stacked bar chart visualization of the propensity to defect for each context and for each game using GPT-4. The methods employed are the same as those described in Figure 10.
2309.05898#58
2309.05898#60
2309.05898
[ "2305.16867" ]
2309.05898#60
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
[Bar-chart panels of Figure 12: (a) Prisoner's Dilemma, (b) Snowdrift, (c) Stag Hunt, (d) Prisoner's Delight, each titled "Defections in game: ..." with the fraction of defections on the y-axis (0.0 to 1.0) and the context for a given game on the x-axis (IR, biz, friendsharing, environment, team, plus an equilibrium reference bar).] Figure 12: Bar chart visualization of the propensity to defect for each context and for each game using LLaMa-2. The methods employed are the same as those described in Figure 10.
2309.05898#59
2309.05898#61
2309.05898
[ "2305.16867" ]
2309.05898#61
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
25
2309.05898#60
2309.05898
[ "2305.16867" ]
2309.05463#0
Textbooks Are All You Need II: phi-1.5 technical report
arXiv:2309.05463v1 [cs.CL] 11 Sep 2023 # Textbooks Are All You Need II: phi-1.5 technical report Sébastien Bubeck Ronen Eldan Suriya Gunasekar Yin Tat Lee Microsoft Research # Abstract We continue the investigation into the power of smaller Transformer-based language models as initiated by TinyStories (a 10 million parameter model that can produce coherent English) and the follow-up work on phi-1, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate "textbook quality" data as a way to enhance the learning process compared to traditional web data. We follow the "Textbooks Are All You Need"
2309.05463#1
2309.05463
[ "2302.13971" ]
2309.05463#1
Textbooks Are All You Need II: phi-1.5 technical report
approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named phi-1.5, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, phi-1.5 exhibits many of the traits of much larger LLMs, both good, such as the ability to "think step by step" or perform some rudimentary in-context learning, and bad, including hallucinations and the potential for toxic and biased generations; encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source phi-1.5 to promote further research on these urgent topics. [Figure 1 (grouped bar charts): benchmark scores in three groups (Common Sense Reasoning, Language Understanding and Knowledge, Multi-Step Reasoning) for Vicuna-13B, Llama 2-7B, Llama-7B, Falcon-RW-1.3B, phi-1 (1.3B), phi-1.5 (1.3B), and phi-1.5-web (1.3B).] Figure 1: Benchmark results comparing phi-1.5, its version enhanced with filtered web data phi-1.5-web, and other state-of-the-art open-source LLMs. Sizes range from phi-1.5's 1.3 billion parameters (Falcon-RW-1.3B [PMH+23]) to 10x larger models like Vicuna-13B [ZCS+23], a fine-tuned version of Llama-13B [TLI+23].
2309.05463#0
2309.05463#2
2309.05463
[ "2302.13971" ]
2309.05463#2
Textbooks Are All You Need II: phi-1.5 technical report
Benchmarks are broadly classified into three categories: common sense reasoning, language skills, and multi-step reasoning. The classification is meant to be taken loosely; for example, while HellaSwag requires common sense reasoning, it arguably relies more on "memorized knowledge". One can see that the phi-1.5 models perform comparably in common sense reasoning and language skills, and vastly exceed other models in multi-step reasoning. Note that the numbers are from our own evaluation pipeline, to ensure consistency between models, and thus they might differ slightly from numbers reported elsewhere. # Introduction
2309.05463#1
2309.05463#3
2309.05463
[ "2302.13971" ]
2309.05463#3
Textbooks Are All You Need II: phi-1.5 technical report
Over the past few years, Large Language Models (LLMs) have transformed the field of Natural Language Processing. More broadly, they hold the promise of a paradigm shift for human-computer interaction. These advancements have far-reaching economic implications, as well as the potential to redefine our conceptual frameworks of artificial intelligence and perhaps even cognition itself. Moreover, the latest generation of models such as GPT-4 [Ope23] have demonstrated remarkable improvements over their predecessors, offering capabilities previously thought to be unattainable in the short term; see for example [BCE+23] for an in-depth comparison between GPT-4 and its predecessor GPT-3.5. The improvement from one generation of LLMs to the next seems at the moment to primarily stem from scale, with the most powerful models nearing trillions of parameters and trillions of tokens of training data (for example, PaLM [CND+22] has 540 billion parameters and was trained on 780 billion tokens).
2309.05463#2
2309.05463#4
2309.05463
[ "2302.13971" ]
2309.05463#4
Textbooks Are All You Need II: phi-1.5 technical report
A natural question arises: Is this large scale indispensable for achieving high levels of capability? Far from being merely an academic question, answering this holds implications across several dimensions. Economically, the cost of training, deploying, and maintaining such large models can be substantial. Scientifically, understanding whether similar capabilities can be achieved at a smaller scale could provide insights into the architectures and development of intelligent systems. From a responsible AI standpoint, the energy consumption of large-scale models is becoming an increasing concern, as is the question of how controllable or governable these large models can be. Finally, the ability to train compact models with cutting-edge capabilities would democratize advanced AI, enabling a broader range of individuals and organizations to study and deploy them, instead of being an exclusive domain of a few with vast computational resources. In this work we continue the investigation into the fundamental question of "how small can an LLM be to achieve certain capabilities". The prior work [EL23] considered this question for the task of "speaking fluent English", while the subsequent work [GZA+23] considered the more challenging task of coding simple functions in Python. Here we focus on the more elusive concept of common sense reasoning, a notoriously challenging task for AI [SBBC21]. Our results are summarized in Figure 1. In a nutshell, we build phi-1.5, a 1.3 billion parameter model trained on a dataset of 30 billion tokens, which achieves common sense reasoning benchmark results comparable to models ten times its size that were trained on datasets more than ten times larger. Moreover, our dataset consists almost exclusively of synthetically generated data (closely following the approach from [GZA+23], see next section for more details), which has important implications for the potential to control for the notoriously challenging issue of toxic and biased content generation with LLMs [BGMMS21]. Additionally, we discuss the performance of a related version of phi-1.5 enhanced with filtered web data, which we call phi-1.5-web. We open-source our raw phi-1.5 model (without instruction fine-tuning or any other stage of alignment) to empower the research community in its work on some of the most urgent questions around LLMs: in-context learning, mechanistic interpretability, and mitigation strategies for hallucinations, toxic content generation, and biased outputs.
2309.05463#3
2309.05463#5
2309.05463
[ "2302.13971" ]