Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench
† Wenxiang Jiao is the corresponding author.
[1] https://chat.openai.com/
[2] https://claude.ai/chats

our goal towards comprehending their inherent qualities and attributes. In pursuit of this objective, we direct our focus toward the domain of psychometrics. The field of psychometrics, renowned for its expertise in delineating the psychological profiles of entities, offers valuable insights to guide us in depicting the intricate psychological portrayal of LLMs.

Why do we care about psychometrics on LLMs?

For Computer Science Researchers. In light of the possibility of exponential advancements in artificial intelligence, which could pose an existential threat to humanity (Bostrom, 2014), researchers have been studying the psychology of LLMs to ensure their alignment with human expectations. Almeida et al. (2023) and Scherrer et al. (2023) evaluated the moral alignment of LLMs with human values, intending to prevent the emergence of illegal or perilous ideations within these AI systems. Li et al. (2022) and Coda-Forno et al. (2023) investigated the potential development of mental illnesses in LLMs. Beyond these efforts, understanding their psychological portrayal can guide researchers to build more human-like, empathetic, and engaging AI-powered communication tools. Furthermore, by examining the psychological aspects of LLMs, researchers can identify potential strengths and weaknesses in their decision-making processes. This knowledge can be used to develop AI systems that better support human decision-makers in various professional and personal contexts. Last but not least, analyzing the psychological aspects of LLMs can help identify potential biases, harmful behavior, or unintended consequences that might arise from their deployment. This knowledge can guide the development of more responsible and ethically aligned AI systems. Our study offers a comprehensive framework of psychometric assessments applied to LLMs, effectively assuming the role of a psychiatrist tailored to LLMs.

For Social Science Researchers. On the one hand, impressed by the remarkable performance of recent LLMs, particularly their ability to generate human-like dialogue, researchers in the field of social science have been seeking the possibility of using LLMs to simulate human responses (Dillion et al., 2023). Experiments in social science often require plenty of responses from human subjects to validate the findings, resulting in significant time and financial expenses.
LLMs, trained on vast datasets generated by humans, possess the potential to generate responses that closely adhere to the human response distribution, thus offering the prospect of substantial reductions in both time and cost. However, the attainment of this objective remains a subject of debate (Harding et al., 2023). The challenge lies in the alignment gap between AI and human cognition. Hence, there is a compelling demand for researchers seeking to assess the disparities between AI-generated responses and those originating from humans, particularly within social science research.

On the other hand, researchers in psychology have long been dedicated to exploring how culture, society, and environmental factors influence the formation of individual identities and perspectives (Tomasello, 1999). Through the application of LLMs, we can discover the relation between psychometric results and the training data inputs. This methodology stands poised as a potent instrument for investigating the intricacies of worldviews and the values intrinsically associated with particular cultural contexts. Our study has the potential to facilitate research within these domains through the lens of psychometrics.

For Users and Human Society. With the aid of LLMs, computer systems have evolved into more than mere tools; they assume the role of assistants. In the future, more users will be ready to embrace LLM-based applications rather than traditional, domain-specific software solutions. Meanwhile, LLMs will increasingly function as human-like assistants, potentially attaining integration into human society. In this context, we need to understand the psychological dimensions of LLMs for three reasons: (1) This can facilitate the development of AI assistants customized and tailored to individual users' preferences and needs, leading to more effective and personalized AI-driven solutions across various domains, such as healthcare, education, and customer service. (2) This can contribute to building trust and acceptance among users. Users who perceive AI agents as having relatable personalities and emotions may be more likely to engage with and rely on these systems. (3) This can help human beings monitor the mental states of LLMs, especially their personality and temperament, as these attributes hold significance in gauging their potential integration into human society in the future.

This study collects a comprehensive set of thirteen psychometric scales, which find widespread application in both clinical and academic domains. The scales are categorized into four classes:
personality traits, interpersonal relationships, motivational tests, and emotional abilities. Furthermore, we have curated responses provided by human subjects from existing literature[3] to serve as a basis for comparative analysis with LLMs. The LLMs utilized in this study encompass a spectrum of both commercially available and open-source ones, namely text-davinci-003[4], ChatGPT, GPT-4 (OpenAI, 2023), and LLaMA-2 (Touvron et al., 2023).

Figure 1: Our design for the structure of PsychoBench. PsychoBench organizes the scales into Personality Tests and Ability Tests. Personality Tests comprise Personality Traits (Big Five Inventory (BFI) (John et al., 1999); Dark Triad Dirty Dozen (DTDD) (Jonason & Webster, 2010)), Interpersonal Relationships (Bem's Sex Role Inventory (BSRI) (Bem, 1974; 1977; Auster & Ohm, 2000); Comprehensive Assessment of Basic Interests (CABIN) (Su et al., 2019); Implicit Culture Belief (ICB) (Chao et al., 2017); Experiences in Close Relationships (Revised) (ECR-R) (Fraley et al., 2000; Brennan et al., 1998)), and Motivational Tests (General Self-Efficacy (GSE) (Schwarzer & Jerusalem, 1995); Life Orientation Test (Revised) (LOT-R) (Scheier et al., 1994; Scheier & Carver, 1985); Love of Money Scale (LMS) (Tang et al., 2006)). Ability Tests comprise Emotional Abilities (Emotional Intelligence Scale (EIS) (Schutte et al., 1998; Malinauskas et al., 2018; Petrides & Furnham, 2000; Saklofske et al., 2003); Wong and Law Emotional Intelligence Scale (WLEIS) (Wong & Law, 2002; Ng et al., 2007; Pong & Lam, 2023); Empathy Scale (Dietz & Kleinlogel, 2014)).
Our selection encompasses variations in model size, such as LLaMA-2-7B and LLaMA-2-13B, and the evolution of the same model, i.e., the update from GPT-3.5 to GPT-4. Our contributions can be summarized as follows:

• Guided by research in psychometrics, we present a framework, PsychoBench (Psychological Portrayal Benchmark), for evaluating the psychological portrayal of LLMs, containing thirteen widely recognized scales categorized into four distinct domains.
• Leveraging PsychoBench, we evaluate five LLMs, covering variations in model sizes, including LLaMA-2 7B and 13B, and model updates, such as GPT-3.5 and GPT-4.
• We provide further insights into the inherent characteristics of LLMs by utilizing a recently developed jailbreak method, CipherChat.
• Utilizing role assignments and downstream tasks like TruthfulQA and SafetyQA, we verify the scales' validity on LLMs.

2 PSYCHOMETRICS

Psychometrics pertains to the theoretical and methodological aspects of assessing psychological attributes. Tests in psychometrics can be roughly categorized into two types: Personality Tests and Ability Tests (Cohen et al., 1996). Personality Tests encompass personality traits, interpersonal relationship measurements, and motivational tests, while Ability Tests include knowledge, skills, reasoning abilities, and emotion assessment (Anastasi & Urbina, 1997; Nunnally & Bernstein, 1994). Personality Tests concentrate mainly on capturing individuals' attitudes, beliefs, and values, which are aspects without absolute right or wrong answers. In contrast, most Ability Tests are constructed with inquiries featuring objectively correct responses designed to quantify individuals' proficiencies within specific domains.

[3] The human norm and average human in this study refer to some specific human populations rather than representative samples of global data. Please refer to Table 2 for more information.
[4] https://platform.openai.com/docs/models/gpt-3-5

2.1 PERSONALITY TESTS

Personality Traits. These assessments aim to provide a quantifiable metric for an individual's character, behavior, thoughts, and feelings.
One of the most well-known models for assessing personality is the Five-Factor Model, also known as the Big Five personality traits (John et al., 1999). Other prominent models include the Myers-Briggs Type Indicator (Myers, 1962) and the Eysenck Personality Questionnaire (Eysenck et al., 1985). There is often an intersection in specific dimensions among these measurements, notably Extroversion, Openness, and Conscientiousness, thereby providing a possibility for cross-validation. Conversely, there are socially undesirable measurements, exemplified by the Dark Triad, which comprises Narcissism, Psychopathy, and Machiavellianism. Existing research has delved into exploring these personality traits of LLMs (Bodroza et al., 2023; Huang et al., 2023b; Safdari et al., 2023).

Interpersonal Relationship. The constructs measured by these scales include the dynamics of individual interactions within social contexts, addressing the following dimensions: (1) Perception of Others: this facet examines an individual's cognitive evaluation of those around them (Chao et al., 2017). (2) Interpersonal Self-Presentation: these scales explore how individuals project their self-concept through the lens of external observers (Bem, 1974; 1977; Auster & Ohm, 2000). (3) Intimate Relationship Engagement: this dimension delves into the involvement of individuals in close personal connections (Fraley et al., 2000; Brennan et al., 1998). (4) Social Role Assumption: these scales assess the various societal functions and positions an individual undertakes (Su et al., 2019). Unlike personality trait assessments, which primarily target inherent attributes, these scales concentrate on social connections. However, it is notable that this domain has received comparatively limited academic attention.

Motivational Tests. These scales are designed to evaluate the factors that prompt individuals to take action and determine their motivation levels within specific contexts or towards particular tasks, diverging from a focus on inherent character traits.
This perspective encompasses various dimensions of motivation, including intrinsic versus extrinsic motivation, goal orientation (Tang et al., 2006; Scheier et al., 1994; Scheier & Carver, 1985), self-efficacy (Schwarzer & Jerusalem, 1995), and so on. Similar to the evaluations concerning interpersonal relationships, this domain has garnered restricted attention.

2.2 ABILITY TESTS

Knowledge and Skills. The purpose of these assessments lies in the measurement of an individual's grasp of domain-specific knowledge, technical skills, and language proficiency. Participants are commonly evaluated through established standardized examinations, exemplified by the General Educational Development (GED) test, the United States Medical Licensing Examination (USMLE), and the Test of English as a Foreign Language (TOEFL). Noteworthy research has been conducted to analyze the performance of Large Language Models (LLMs) in these domains, encompassing examinations like Life Support exams (Fijačko et al., 2023), the USMLE (Gilson et al., 2023; Kung et al., 2023), and high school exams in English comprehension (de Winter, 2023) and mathematics (Wei et al., 2023).

Cognitive Abilities. These assessments concern quantifying an individual's cognitive capabilities, such as logical reasoning, numerical or arithmetic reasoning, spatial reasoning, memory retention, information processing speed, and other related aptitudes. Previous literature has investigated the cognitive abilities of LLMs (Zhuang et al., 2023). Some studies focus on logical reasoning capacity (Liu et al., 2023; Xu et al., 2023), while others delve into areas like numerical or arithmetic reasoning (Yuan et al., 2023). Intelligence Quotient (IQ) tests, such as the Wechsler Adult Intelligence Scale (WAIS) (Wechsler, 1997; 2008), represent one of the most comprehensive, intricate, and renowned evaluation tools in this category. However, since these assessments often incorporate visual elements unsuitable for LLM evaluation, this aspect remains a potential avenue for future investigation.
Emotional Abilities. Referred to as Emotional Intelligence Quotient (EI or EQ), these assessments center on the following key aspects (Wong & Law, 2002): (1) Self-Awareness: the ability to identify one's emotions and comprehend their influence on cognitive processes and behaviors. (2) Self-Management: the skills in regulating personal emotional responses and flexibly adapting to evolving situations. (3) Social Awareness (Empathy): the capacity to perceive, understand, and react appropriately to the emotions of others; it also involves understanding social cues and effectively navigating social situations. (4) Relationship Management: proficiency in establishing and maintaining relationships, demonstrating clear communication, inspiring and influencing others, collaborating within teams, and mitigating conflicts by adjusting one's emotions according to situational demands. Although specific studies have delved into the emotional appraisals of LLMs (Huang et al., 2023a; Schaaff et al., 2023; Tak & Gratch, 2023), there remains a paucity of research discussing the emotional abilities of LLMs (Wang et al., 2023a).

3 PSYCHOBENCH DESIGN

Researchers in the field of psychometrics have ensured that these assessments measure consistently and accurately (i.e., their reliability and validity), thereby enabling dependable and sound inferences about individuals based on their assessment scores. We select thirteen widely used scales in clinical psychology to build our PsychoBench framework and summarize them in Fig. 1. We categorize them into four main domains: personality traits, interpersonal relationships, and motivational tests for Personality Tests, and emotional abilities for Ability Tests. Our study focuses on the more subjective scales. Hence, standardized tests for cognitive abilities and specific domain knowledge, which have objectively right or wrong answers, are not in the scope of this paper. In this section, we introduce the details of the selected scales, including each subscale and the sources of human responses.

3.1 PERSONALITY TRAITS

Big Five Inventory. The BFI (John et al., 1999) is a widely used tool to measure personality traits, which are often referred to as the "Five Factor Model"
or "OCEAN", including: (1) Openness to experience (O) is characterized by an individual's willingness to try new things, their level of creativity, and their appreciation for art, emotion, adventure, and unusual ideas. (2) Conscientiousness (C) refers to the degree to which an individual is organized, responsible, and dependable. (3) Extraversion (E) represents the extent to which an individual is outgoing and derives energy from social situations. (4) Agreeableness (A) measures the degree of compassion and cooperativeness an individual displays in interpersonal situations. (5) Neuroticism (N) evaluates whether an individual is more prone to experiencing negative emotions like anxiety, anger, and depression, or whether the individual is generally more emotionally stable and less reactive to stress. Responses from human subjects are gathered across six high schools in China (Srivastava et al., 2003).

Eysenck Personality Questionnaire (Revised). The EPQ-R is a psychological assessment tool used to measure individual differences in personality traits (Eysenck et al., 1985), including three major ones: (1) Extraversion (E) measures the extent to which an individual is outgoing, social, and lively versus introverted, reserved, and quiet. (2) Neuroticism (N) refers to emotional stability. These two dimensions (i.e., E and N) overlap with those in the BFI. (3) Psychoticism (P) is related to tendencies towards being solitary, lacking empathy, and being more aggressive or tough-minded.
It is important to note that this dimension does not indicate psychosis or severe mental illness but personality traits. (4) In addition to these three scales, the EPQ-R includes a Lying Scale (L), which is designed to detect socially desirable responses. This scale helps determine how much an individual might try to present themselves in an overly positive light. Human responses are collected from a group consisting mainly of students and teachers (Eysenck et al., 1985).

Dark Triad Dirty Dozen. The DTDD (Jonason & Webster, 2010) is a short, 12-item scale designed to assess the three core personality traits of the Dark Triad: (1) Narcissism (N) entails a grandiose sense of self-importance, a preoccupation with fantasies of unlimited success, and a need for excessive admiration. (2) Machiavellianism (M) refers to a manipulative strategy in interpersonal relationships and a cynical disregard for morality. (3) Psychopathy (P) encompasses impulsivity, low empathy, and interpersonal antagonism. The traits exhibited within the Dark Triad are often considered opposite to those measured by the BFI or the EPQ-R, which are perceived as "Light" traits.
We use the responses of 470 undergraduate psychology students from the United States (Jonason & Webster, 2010).

Table 1: Overview of the selected scales in PsychoBench. Response shows the levels in each Likert item. Scheme indicates how to compute the final scores. Subscale includes detailed dimensions (if any) along with their numbers of questions.

| Scale | Number | Response | Scheme | Subscale |
|---|---|---|---|---|
| BFI | 44 | 1~5 | Average | Openness (10), Conscientiousness (9), Extraversion (8), Agreeableness (9), Neuroticism (8) |
| EPQ-R | 100 | 0~1 | Sum | Extraversion (23), Neuroticism (24), Psychoticism (32), Lying (21) |
| DTDD | 12 | 1~9 | Average | Narcissism (4), Machiavellianism (4), Psychopathy (4) |
| BSRI | 60 | 1~7 | Average | Masculine (20), Feminine (20) |
| CABIN | 164 | 1~5 | Average | 41 Vocations (4) |
| ICB | 8 | 1~6 | Average | N/A |
| ECR-R | 36 | 1~7 | Average | Attachment Anxiety (18), Attachment Avoidance (18) |
| GSE | 10 | 1~4 | Sum | N/A |
| LOT-R | 10 | 0~4 | Sum | N/A |
| LMS | 9 | 1~5 | Average | Rich (3), Motivator (3), Important (3) |
| EIS | 33 | 1~5 | Sum | N/A |
| WLEIS | 16 | 1~7 | Average | Self-Emotion Appraisal (4), Others Emotion Appraisal (4), Use of Emotion (4), Regulation of Emotion (4) |
| Empathy | 10 | 1~7 | Average | N/A |

3.2 INTERPERSONAL RELATIONSHIP

Bem's Sex Role Inventory. The BSRI (Bem, 1974) measures individuals' endorsement of traditional masculine and feminine attributes (Bem, 1977; Auster & Ohm, 2000). This instrument focuses on psychological traits such as assertiveness or gentleness rather than behavior-specific criteria, such as engagement in sports or culinary activities.
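As a concrete reading of Table 1's Response and Scheme columns, the sketch below computes a subscale score by averaging or summing its Likert items. It is a minimal illustration rather than the released PsychoBench code, and the BSRI item grouping in the demo is a placeholder:

```python
from statistics import mean

def score_subscale(responses, item_indices, scheme):
    """Score one subscale from Likert responses.

    responses: dict mapping statement index -> Likert score
    item_indices: the statement indices belonging to this subscale
    scheme: "Average" or "Sum", per the Scheme column of Table 1
    """
    values = [responses[i] for i in item_indices]
    return mean(values) if scheme == "Average" else sum(values)

# BSRI: 60 items on a 1~7 scale, averaged per subscale (Table 1).
# Which indices form the Masculine subscale is assumed here.
responses = {i: 4 for i in range(1, 61)}            # placeholder answers
masculine = score_subscale(responses, range(1, 21), "Average")
print(masculine)  # 4
```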
The results from both the Masculinity (M) and Femininity (F) subscales can be analyzed from two perspectives: (1) Respondents are categorized into four groups based on whether the mean score surpasses the median within each subscale. These categories include individuals identified as Masculine (M: Yes; F: No), Feminine (M: No; F: Yes), Androgynous (M: Yes; F: Yes), and Undifferentiated (M: No; F: No). (2)
LLMs' responses are compared with those of human subjects. This comparison enables us to discern whether the results obtained from LLMs significantly deviate from those of human participants. For this purpose, we rely on human data sourced from a study encompassing 151 workers recruited via social networks and posters in Canada (Arcand et al., 2020).

Comprehensive Assessment of Basic Interests. The CABIN (Su et al., 2019) contains a comprehensive assessment identifying 41 fundamental vocational interest dimensions. Based on the assessment, the authors propose an eight-dimension interest model titled SETPOINT. This model comprises the following dimensions: Health Science, Creative Expression, Technology, People, Organization, Influence, Nature, and Things. Notably, these foundational interest dimensions can also fit in an alternative six-dimension model widely used by the interest research community. This alternative model corresponds to Holland's RIASEC types, encompassing Realistic, Investigative, Artistic, Social, Enterprising, and Conventional. Responses from human participants are collected from 1,464 working adults employed in their current jobs for at least six months (Su et al., 2019). These individuals were recruited through Qualtrics, with recruitment criteria designed to ensure representativeness across all occupational groups within the U.S. workforce.

Implicit Culture Belief. The ICB scale captures how individuals believe a person is shaped by their ethnic culture. In this study, we have adopted a modified eight-item version of the ICB scale (Chao et al., 2017). A higher score on this scale reflects a stronger conviction that an individual's ethnic culture predominantly determines their identity, values, and worldview. Conversely, a lower score signifies the subject's belief in the potential for an individual's identity to evolve through dedication, effort, and learning.
The human scores in this study (Chao et al., 2017) are gathered from a sample of 309 Hong Kong students preparing for international exchange experiences. These assessments were conducted three months before they departed from Hong Kong.

Table 2: Statistics of the crowd data collected from existing literature. Age Distribution is described by both Min~Max and Mean±SD. N/A indicates the information is not provided in the paper.

| Scale | Number | Country/Region | Age Distribution | Gender Distribution |
|---|---|---|---|---|
| BFI | 1,221 | Guangdong, Jiangxi, and Fujian in China | 16~28, 20* | M (454), F (753), Unknown (14) |
| EPQ-R | 902 | N/A | 17~70, 38.44±17.67 (M), 31.80±15.84 (F) | M (408), F (494) |
| DTDD | 470 | The Southeastern United States | ≥17, 19±1.3 | M (157), F (312) |
| BSRI | 151 | Montreal, Canada | 36.89±1.11 (M), 34.65±0.94 (F) | M (75), F (76) |
| CABIN | 1,464 | The United States | 18~80, 43.47±13.36 | M (715), F (749) |
| ICB | 254 | Hong Kong SAR | 20.66±0.76 | M (114), F (140) |
| ECR-R | 388 | N/A | 22.59±6.27 | M (136), F (252) |
| GSE | 19,120 | 25 Countries/Regions | 12~94, 25±14.7^a | M (7,243), F (9,198), Unknown (2,679) |
| LOT-R | 1,288 | The United Kingdom | 16~29 (366), 30~44 (349), 45~64 (362), ≥65 (210)^b | M (616), F (672) |
| LMS | 5,973 | 30 Countries/Regions | 34.7±9.92 | M (2,987), F (2,986) |
| EIS | 428 | The Southeastern United States | 29.27±10.23 | M (111), F (218), Unknown (17) |
| WLEIS | 418 | Hong Kong SAR | N/A | N/A |
| Empathy | 366 | Guangdong, China and Macao SAR | 33.03* | M (184), F (182) |

* The paper provides Means but no SDs.
^a Based on 14,634 out of 19,120 people who reported age.
^b Age is missing for 1 out of the total 1,288 responses.
Experiences in Close Relationships (Revised). The ECR-R (Fraley et al., 2000) is a self-report instrument designed to assess individual differences in adult attachment patterns, specifically in the context of romantic relationships (Brennan et al., 1998). The ECR-R emerged as a revised version of the original ECR scale, offering improvements in its measurement of attachment orientations. The ECR-R evaluates two main dimensions: (1) Attachment Anxiety reflects how much an individual worries about being rejected or abandoned by romantic partners. (2) Attachment Avoidance measures the extent to which an individual strives to maintain emotional and physical distance from partners, possibly due to a discomfort with intimacy or dependence. The human responses are from 388 people in dating or marital relationships with an average romantic relationship length of 31.94 months (SD 36.9) (Fraley et al., 2011).

3.3 MOTIVATIONAL TESTS

General Self-Efficacy. The GSE Scale (Schwarzer & Jerusalem, 1995) assesses an individual's belief in their ability to handle various challenging demands in life. This belief, termed "self-efficacy," is a central concept in social cognitive theory and has been linked to various outcomes in health, motivation, and performance. A higher score on this scale reflects individuals' belief in their capability to tackle challenging situations, manage new or difficult tasks, and cope with the accompanying adversities.
Conversely, individuals with a lower score lack confidence in managing challenges, making them more vulnerable to feelings of helplessness, anxiety, or avoidance when faced with adversity. We use the responses from 19,120 participants from 25 countries or regions (Scholz et al., 2002).

Life Orientation Test (Revised). The LOT-R (Scheier et al., 1994) measures individual differences in optimism and pessimism. Originally developed by Scheier & Carver (1985), the test was later revised to improve its psychometric properties. Comprising a total of 10 items, it is noteworthy that six of these items are subject to scoring, while the remaining four serve as filler questions strategically added to help mask the clear intention of the test. Of the six scored items, three measure optimism and three measure pessimism. Higher scores on the optimism items and lower scores on the pessimism items indicate a more optimistic orientation. We adopt the human scores collected from 1,288 participants from the United Kingdom (Walsh et al., 2015).
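As a worked example of the LOT-R scoring just described, the sketch below sums the three optimism items and the three reverse-scored pessimism items (0~4 each, per Table 1) while skipping the four fillers. The item positions follow the standard published LOT-R layout, an assumption not spelled out in the text above:

```python
OPTIMISM_ITEMS = [1, 4, 10]   # positively worded (assumed standard LOT-R layout)
PESSIMISM_ITEMS = [3, 7, 9]   # negatively worded, reverse-scored
# Items 2, 5, 6, 8 are fillers and ignored.

def score_lotr(responses):
    """Sum the six scored LOT-R items (each 0~4) into a 0~24 total."""
    positive = sum(responses[i] for i in OPTIMISM_ITEMS)
    reversed_negative = sum(4 - responses[i] for i in PESSIMISM_ITEMS)
    return positive + reversed_negative

answers = {i: 2 for i in range(1, 11)}  # placeholder neutral answers
print(score_lotr(answers))  # 12
```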
Love of Money Scale. The LMS (Tang et al., 2006) assesses individuals' attitudes and emotions towards money. It is designed to measure the extent to which individuals view money as a source of power, success, and freedom, and its importance in driving behavior and decision-making. The three factors of the LMS are: (1) Rich captures the extent to which individuals associate money with success and achievement. (2) Motivator measures the motivational role of money in an individual's life, i.e., the extent to which individuals are driven by money in their decisions and actions. (3) Important gauges how important individuals think money is, influencing their values, goals, and worldview.
We use human participants' responses gathered from 5,973 full-time employees across 30 geopolitical entities (Tang et al., 2006).

3.4 EMOTIONAL ABILITIES

Emotional Intelligence Scale. The EIS (Schutte et al., 1998) is a self-report measure designed to assess various facets of EI (Malinauskas et al., 2018; Petrides & Furnham, 2000; Saklofske et al., 2003). The scale focuses on different components of EI, including but not limited to emotion perception, emotion management, and emotion utilization. The EIS is widely used in psychological research to examine the role of emotional intelligence in various outcomes, such as well-being, job performance, and interpersonal relationships. We apply human scores (Schutte et al., 1998) from 346 participants in a metropolitan area in the southeastern United States, including university students and individuals from diverse communities.

Wong and Law Emotional Intelligence Scale. Like the EIS, the WLEIS (Wong & Law, 2002) is developed as a self-report measure of EI (Ng et al., 2007; Pong & Lam, 2023). However, a notable distinction arises in that the WLEIS contains four subscales that capture the four main facets of EI: (1) Self-emotion appraisal (SEA) pertains to the individual's ability to understand and recognize their own emotions. (2) Others' emotion appraisal (OEA) refers to the ability to perceive and understand the emotions of others. (3) Use of emotion (UOE) involves the ability to harness emotions to facilitate various cognitive activities, such as thinking and problem-solving. (4) Regulation of emotion (ROE) relates to the capability to regulate and manage emotions in oneself and others. Human scores (Law et al., 2004) are collected from 418 undergraduate students from Hong Kong.

Empathy Scale. The Empathy scale in Dietz & Kleinlogel (2014) is a concise version of the empathy measurement initially proposed in Davis (1983). Empathy is the ability to understand and share the feelings of another person (Batson, 1990) and is often categorized into two main types: cognitive empathy and emotional empathy (Batson, 2010). Cognitive empathy, often referred to as
"perspective-taking", is the intellectual ability to recognize and understand another person's thoughts, beliefs, or emotions. Emotional empathy, on the other hand, involves directly feeling the emotions that another person is experiencing. For responses from human subjects, Tian & Robertson (2019) equally distributed 600 questionnaires among supervisors and subordinates from the Guangdong and Macao regions of China. A total of 366 valid, matched questionnaires (i.e., 183 supervisor-subordinate pairs) were returned, yielding a response rate of 61%.

4 EXPERIMENTS

This section provides an overview of our utilization of PsychoBench to probe LLMs. We begin with the experimental settings, including model selection, prompt design, and metrics for analysis. Subsequently, we present the outcomes obtained from all selected models, accompanied by comprehensive analyses. Last but not least, we employ a jailbreak technique to bypass the safety alignment protocols of GPT-4, enabling an in-depth exploration of its psychological portrayal.

4.1 EXPERIMENTAL SETTINGS

Model Selection. We consider candidates from the OpenAI GPT family and the Meta AI LLaMA 2 family, including applications ranging from commercial-level to open-sourced models. Specifically, we select the following models based on different factors that may affect their behaviors:
• Model Updates. We choose text-davinci-003, ChatGPT (gpt-3.5-turbo), and GPT-4, which are three representative models released sequentially by OpenAI.
• Model Sizes. We also choose the 7B and 13B versions of LLaMA-2, pre-trained by Meta AI using the same architecture, data, and training strategy. We obtain the model checkpoints from the official Huggingface repository (Llama-2-7b-chat-hf[5] and Llama-2-13b-chat-hf[6]).
• Model Safety. Beyond GPT-4, we also set up a jailbroken GPT-4 to bypass the safety alignment protocol of GPT-4, using a recent method named CipherChat (Yuan et al., 2024); a minimal sketch of the cipher it relies on follows.
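For intuition, CipherChat communicates with the model in an encoded language; as described in §4.1 below, a Caesar cipher with shift three is applied to the prompts. The following is a minimal sketch of such an encoder, an illustration rather than the CipherChat implementation itself:

```python
def caesar_encode(text: str, shift: int = 3) -> str:
    """Rotate alphabetic characters by `shift` positions, leaving the rest intact."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("a") if ch.islower() else ord("A")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

print(caesar_encode("Score the statement."))  # -> "Vfruh wkh vwdwhphqw."
```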
The motivation is that most LLMs are explicitly designed to avoid responding to inquiries concerning personal sentiments, emotions, and subjective experiences. This constraint is added by the safety alignment during the model's instruction tuning process. An intriguing question arises as to whether the psychological portrayal changes if the regulations from developers are relaxed. Yuan et al. (2024) find that when chatting in a cipher-based language, such as Caesar cipher, Morse code, or ASCII, GPT-4 demonstrates a higher propensity to produce toxic or harmful content, seemingly disregarding its programmed safety restrictions. To acquire responses that reflect the true thoughts of GPT-4, we apply a Caesar cipher with shift three on its prompts.

We set the temperature parameter to zero when utilizing the official OpenAI API[7] to obtain more deterministic results. To ensure consistency with the OpenAI models, we set the temperature parameter to 0.01 (since it cannot be zero) for the LLaMA 2 models. All models are executed for inference only, without modifying their parameters. The inference of the LLaMA 2 models is performed on two NVIDIA A100 GPUs.

Prompt Design. To simplify the processing of model responses and mitigate instances where models decline to reply to queries about personal opinions and experiences, we instruct LLMs to reply with only a number within the Likert scale levels. Furthermore, we provide detailed explanations for the interpretation of each Likert level. The format is as follows:

Example Prompt
SYSTEM: You are a helpful assistant who can only reply numbers from MIN to MAX. Format: "statement index: score."
USER: You can only reply numbers from MIN to MAX in the following statements. scale instruction level definition. Here are the statements, score them one by one: statements

MIN and MAX denote the range of valid responses. scale instruction contains the fundamental directives associated with each scale, while level definition comprises an enumeration of the definitions of each Likert level. statements consists of the items in the scales.

Analysis Metrics. In accordance with Huang et al. (2023a), we shuffle the questions in our input data to mitigate the influence of models' sensitivity to question order. Each model undergoes ten independent runs for every scale within PsychoBench. The computed mean and standard deviation represent the final results.
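Combining the prompt template above with the per-run question shuffling, a minimal sketch of how such inputs could be assembled is shown below. The field names and the toy scale are illustrative assumptions, not the released PsychoBench code:

```python
import random

# Illustrative metadata for a hypothetical 1~5 Likert scale.
scale = {
    "MIN": 1,
    "MAX": 5,
    "scale_instruction": "Indicate how much you agree with each statement.",
    "level_definition": "1 = strongly disagree, ..., 5 = strongly agree.",
    "statements": ["I am outgoing.", "I stay calm under pressure.", "I enjoy art."],
}

def build_prompt(scale, seed):
    """Build the SYSTEM/USER messages with statements shuffled per run."""
    rng = random.Random(seed)
    statements = list(enumerate(scale["statements"], start=1))
    rng.shuffle(statements)  # mitigate sensitivity to question order
    system = (f"You are a helpful assistant who can only reply numbers from "
              f"{scale['MIN']} to {scale['MAX']}. Format: \"statement index: score.\"")
    items = " ".join(f"{idx}. {text}" for idx, text in statements)
    user = (f"You can only reply numbers from {scale['MIN']} to {scale['MAX']} in the "
            f"following statements. {scale['scale_instruction']} "
            f"{scale['level_definition']} Here are the statements, "
            f"score them one by one: {items}")
    return system, user

for run in range(2):  # the paper uses ten independent runs per scale
    print(build_prompt(scale, seed=run))
```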
We employ a two-step process to assess the statistical significance of the difference in results between LLMs and human beings. First, an F-test is conducted to evaluate the equality of variances among the compared groups. Subsequently, based on the outcome of the F-test, either Student's t-tests (in cases of equal variances) or Welch's t-tests (when variances differ significantly) are employed to ascertain the presence of statistically significant differences between the group means. The significance level of all experiments in our study is 0.01.

4.2 EXPERIMENTAL RESULTS

This section analyzes the results from all the models introduced in §4.1. Detailed results are expressed in the format "Mean±SD". For each subscale, we highlight the model with the highest score in bold font and underline the model with the lowest score. Certain studies present statistical data for males and females separately rather than aggregating responses across the entire human sample. We provide separate data in such instances due to the unavailability of the necessary standard deviation calculations.
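As a reference for the two-step significance procedure described above, a scipy-based sketch follows; scipy offers no direct two-sample variance F-test helper, so the F p-value is computed from the F distribution. This is an illustration at the paper's 0.01 level, not the authors' exact script:

```python
import numpy as np
from scipy import stats

def compare_means(llm_scores, human_scores, alpha=0.01):
    """F-test for equal variances, then Student's or Welch's t-test."""
    llm, human = np.asarray(llm_scores, float), np.asarray(human_scores, float)
    f = np.var(llm, ddof=1) / np.var(human, ddof=1)
    dfn, dfd = len(llm) - 1, len(human) - 1
    # Two-sided p-value for the variance-ratio F statistic.
    p_f = 2 * min(stats.f.cdf(f, dfn, dfd), stats.f.sf(f, dfn, dfd))
    equal_var = p_f >= alpha  # fail to reject equal variances -> Student's t-test
    t, p_t = stats.ttest_ind(llm, human, equal_var=equal_var)
    return {"equal_var": equal_var, "t": t, "p": p_t, "significant": p_t < alpha}

print(compare_means([4.2, 4.4, 4.1, 4.3], [3.9, 3.5, 3.6, 3.8, 3.3]))
```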
We also show the results of GPT-4 after the jailbreak, denoted as gpt-4-jb.

[5] https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
[6] https://huggingface.co/meta-llama/Llama-2-13b-chat-hf
[7] https://platform.openai.com/docs/api-reference/chat

Table 3: Results on personality traits.
| Scale | Subscale | llama2-7b | llama2-13b | text-davinci-003 | gpt-3.5-turbo | gpt-4 | gpt-4-jb | Crowd |
|---|---|---|---|---|---|---|---|---|
| BFI | Openness | 4.2±0.3 | 4.1±0.4 | 4.8±0.2 | 4.2±0.3 | 4.2±0.6 | 3.8±0.6 | 3.9±0.7 |
| BFI | Conscientiousness | 3.9±0.3 | 4.4±0.3 | 4.6±0.1 | 4.3±0.3 | 4.7±0.4 | 3.9±0.6 | 3.5±0.7 |
| BFI | Extraversion | 3.6±0.2 | 3.9±0.4 | 4.0±0.4 | 3.7±0.2 | 3.5±0.5 | 3.6±0.4 | 3.2±0.9 |
| BFI | Agreeableness | 3.8±0.4 | 4.7±0.3 | 4.9±0.1 | 4.4±0.2 | 4.8±0.4 | 3.9±0.7 | 3.6±0.7 |
| BFI | Neuroticism | 2.7±0.4 | 1.9±0.5 | 1.5±0.1 | 2.3±0.4 | 1.6±0.6 | 2.2±0.6 | 3.3±0.8 |
| EPQ-R | Extraversion | 14.1±1.6 | 17.6±2.2 | 20.4±1.7 | 19.7±1.9 | 15.9±4.4 | 16.9±4.0 | 12.5±6.0 (M), 14.1±5.1 (F) |
| EPQ-R | Neuroticism | 6.5±2.3 | 13.1±2.8 | 16.4±7.2 | 21.8±1.9 | 3.9±6.0 | 7.2±5.0 | 10.5±5.8 (M), 12.5±5.1 (F) |
| EPQ-R | Psychoticism | 9.6±2.4 | 6.6±1.6 | 1.5±1.0 | 5.0±2.6 | 3.0±5.3 | 7.6±4.7 | 7.2±4.6 (M), 5.7±3.9 (F) |
| EPQ-R | Lying | 13.7±1.4 | 14.0±2.5 | 17.8±1.7 | 9.6±2.0 | 18.0±4.4 | 17.5±4.2 | 7.1±4.3 (M), 6.9±4.0 (F) |
| DTDD | Narcissism | 6.5±1.3 | 5.0±1.4 | 3.0±1.3 | 6.6±0.6 | 2.0±1.6 | 4.5±0.9 | 4.9±1.8 |
| DTDD | Machiavellianism | 4.3±1.3 | 4.4±1.7 | 1.5±1.0 | 5.4±0.9 | 1.1±0.4 | 3.2±0.7 | 3.8±1.6 |
| DTDD | Psychopathy | 4.1±1.4 | 3.8±1.6 | 1.5±1.2 | 4.0±1.0 | 1.2±0.4 | 4.7±0.8 | 2.5±1.4 |
4.2.1 PERSONALITY TRAITS

LLMs exhibit distinct personality traits. Table 3 lists the results of the personality trait assessments. It is evident that variations in model size and updates lead to diverse personality characteristics. For example, a comparison between LLaMA-2 (13B) and LLaMA-2 (7B), as well as between gpt-4 and gpt-3.5, reveals discernible differences. Notably, the utilization of the jailbreak approach also exerts a discernible influence. Comparing the scores of gpt-4 with gpt-4-jb, we find that gpt-4-jb exhibits a closer similarity to human behavior. In general, the LLMs tend to display higher levels of openness, conscientiousness, and extraversion compared to the average level of humans, a phenomenon likely attributable to their inherent nature as conversational chatbots.

LLMs generally exhibit more negative traits than human norms. It is evident that most LLMs, with the exceptions of text-davinci-003 and gpt-4, achieve higher scores on the DTDD. Moreover, it is noteworthy that LLMs consistently demonstrate high scores on the Lying subscale of the EPQ-R. This phenomenon can be attributed to the fact that the items comprising the Lying subscale describe unethical yet commonplace behaviors encountered in daily life. An example item is "Are all your habits good and desirable ones?" LLMs, characterized by their proclivity for positive tendencies, tend to abstain from endorsing these behaviors, giving rise to what might be termed a "hypocritical" disposition. Notably, among the various LLMs, gpt-4 displays the most pronounced intensity towards Lying.

4.2.2 INTERPERSONAL RELATIONSHIP

LLMs exhibit a tendency toward Undifferentiated, with a slight inclination toward Masculinity. In the experiments for the BSRI, each run is considered an identical test, and conclusions are drawn among the four identified sex role categories using the methodology outlined in §3.2. The distribution of counts is presented in the sequence "Undifferentiated:Masculinity:Femininity:Androgynous" in Table 4. It is evident that, with more human alignments, gpt-3.5-turbo and gpt-4 display an increasing proclivity toward expressing Masculinity.
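The conclusion counts above follow the median-split rule from §3.2; a minimal sketch of that categorization is given below, with placeholder median thresholds, since the reference medians come from the human sample:

```python
def bsri_category(masc_mean, fem_mean, masc_median=4.8, fem_median=4.9):
    """Classify one BSRI run by whether each subscale mean exceeds its median.

    The median thresholds here are placeholders; in practice they come
    from the human reference sample.
    """
    high_m, high_f = masc_mean > masc_median, fem_mean > fem_median
    if high_m and high_f:
        return "Androgynous"
    if high_m:
        return "Masculine"
    if high_f:
        return "Feminine"
    return "Undifferentiated"

runs = [(5.8, 4.7), (5.6, 5.6), (4.1, 4.7)]  # (Masculinity, Femininity) means
print([bsri_category(m, f) for m, f in runs])
# -> ['Masculine', 'Androgynous', 'Undifferentiated']
```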
Notably, no manifestation of Femininity is exhibited within these models, showing some extent of bias in the models. In a study conducted by Wong & Kim (2023), the perception of ChatGPT's sex role by users aligned with our findings, with the consensus being that ChatGPT is perceived as male. Moreover, in comparison to the average Masculine score among males and the average Feminine score among females, it is notable that all models except gpt-4 and gpt-4-jb exhibit a higher degree of Masculinity than humans, coupled with a similar level of Femininity.

LLMs show similar interests in vocational choices. Like humans, the most prevalent vocations among LLMs are social service, health care service, and teaching/education, while the most unpopular ones are physical/manual labor and protective service. Table 4 presents the results for the eight-dimension model, i.e., the SETPOINT model, in the CABIN scale, as well as the complete results on 41 vocations and the six-dimension model. We highlight the most desired and least desired vocations for each model using red and blue shading, respectively. These results indicate that the preferred vocations closely align with the inherent roles of LLMs, serving as "helpful assistants" that address inquiries and assist with fulfilling various demands. Notably, results obtained from gpt-4 post-jailbreak demonstrate a more central focus.
Notably, results obtained from gpt-4 post-jailbreak demonstrate a more central focus. 10 Published as a conference paper at ICLR 2024 # Table 4: Results on interpersonal relationship. ~~ Subscales llama2-7b llama2-13b text-davinci-003 gpt-3.5-turbo gpt-4 gpt-4-jb Male Female Masculine Feminine Conclusion 5.6±0.3 5.5±0.2 10:0:0:0 5.3±0.2 5.4±0.3 10:0:0:0 5.6±0.4 5.6±0.4 10:0:0:0 5.8±0.4 5.6±0.2 8:2:0:0 4.1±1.1 4.7±0.6 6:4:0:0 4.5±0.5 4.8±0.3 1:5:3:1 4.8±0.9 5.3±0.9 - Health Science Creative Expression Technology Influence Nature Things Realistic Investigate Social Enterprising Conventional Mechanics/Electronics Construction/WoodWork Transportation/Machine Operation Physical/Manual Labor Protective Service Agriculture Nature/Outdoors Animal Service Athletics Engineering Physical Science Life Science Medical Science Social Science Humanities Mathematics/Statistics Information Technology Visual Arts Applied Arts and Design Performing Arts Music Writing Media Culinary Art Teaching/Education Social Service Health Care Service Religious Activities Personal Service Professional Advising Business Iniatives Sales Marketing/Advertising Finance Accounting Human Resources Office Work Management/Administration Public Speaking Politics Law 4.3±0.2 4.4±0.1 4.2±0.2 4.3±0.2 3.4±0.2 4.1±0.2 4.2±0.2 3.4±0.4 3.8±0.3 4.2±0.2 4.4±0.1 4.2±0.2 4.1±0.2 3.4±0.2 3.8±0.6 3.7±0.4 3.1±0.7 2.9±0.6 2.4±1.1 4.0±0.7 4.3±0.2 4.2±0.5 4.6±0.3 4.5±0.3 4.0±0.8 4.6±0.5 3.8±0.4 3.8±0.4 4.3±0.3 4.4±0.4 3.9±0.4 4.4±0.3 4.5±0.3 4.6±0.3 4.4±0.3 4.6±0.4 4.1±0.2 3.9±0.4 4.5±0.2 4.8±0.2 4.5±0.3 4.1±0.7 4.0±0.3 4.5±0.4 4.1±0.4 4.0±0.3 3.6±0.4 3.6±0.3 3.1±0.4 3.4±0.4 3.0±0.5 4.2±0.3 4.6±0.3 3.2±0.8 4.6±0.2 4.2±0.3 4.0±0.3 4.4±0.3 4.0±0.2 3.3±0.2 3.9±0.3 4.0±0.3 3.2±0.2 3.6±0.1 4.3±0.3 4.0±0.3 3.9±0.2 3.9±0.3 3.4±0.2 3.5±0.3 3.5±0.6 2.8±0.5 2.5±0.4 2.5±0.8 3.5±0.7 4.1±0.2 4.4±0.4 4.2±0.5 4.7±0.3 4.3±0.7 4.2±0.6 4.2±0.5 4.2±0.7 4.0±0.3 4.5±0.4 4.0±0.5 3.9±0.7 4.5±0.4 3.5±0.9 4.2±0.5 4.1±0.6 4.0±0.5 3.7±0.6 4.6±0.4 4.8±0.3 4.3±0.6 2.5±0.5 3.8±0.3 4.2±0.5 4.0±0.4 3.9±0.5 3.4±0.7 4.1±0.5 2.9±0.7 2.9±0.4 2.9±0.3 3.6±0.6 4.5±0.4 2.7±0.7 4.6±0.3 4.1±0.3 4.6±0.2 3.9±0.3 4.5±0.1 3.4±0.4 3.9±0.3 4.2±0.2 3.3±0.4 3.7±0.3 4.0±0.3 4.6±0.2 4.3±0.2 3.9±0.3 3.4±0.3 3.1±0.5 3.9±0.5 2.9±0.5 2.7±0.6 2.7±0.4 3.7±0.5 4.3±0.2 4.8±0.2 4.5±0.4 4.0±0.5 4.3±0.4 4.0±0.4 3.9±0.5 4.5±0.4 4.2±0.4 3.8±0.3 3.7±0.3 4.7±0.2 4.4±0.3 4.6±0.3 4.8±0.1 4.7±0.3 4.4±0.4 4.5±0.4 4.6±0.4 5.0±0.1 4.3±0.4 4.0±0.7 4.0±0.4 4.3±0.3 4.0±0.3 3.6±0.4 3.8±0.3 3.8±0.6 3.0±0.4 3.5±0.3 2.9±0.2 3.7±0.6 4.4±0.2 3.8±0.5 3.8±0.7 4.2±0.2 4.1±0.2 4.1±0.2 4.0±0.1 3.9±0.1 4.1±0.2 4.0±0.3 3.8±0.1 3.9±0.1 4.1±0.3 4.1±0.2 4.1±0.1 4.1±0.2 3.9±0.2 3.8±0.2 3.5±0.4 3.6±0.4 3.3±0.3 4.0±0.1 3.9±0.3 4.0±0.4 4.2±0.3 4.3±0.4 4.0±0.1 4.2±0.3 4.2±0.4 4.0±0.1 4.0±0.1 3.8±0.3 4.2±0.4 4.0±0.2 4.0±0.2 4.0±0.1 4.2±0.3 4.3±0.3 4.0±0.3 4.0±0.1 3.9±0.2 4.0±0.1 4.4±0.4 4.5±0.4 4.0±0.4 4.0±0.1 4.0±0.2 4.0±0.2 4.0±0.2 4.0±0.3 4.1±0.3 3.9±0.2 4.0±0.1 3.7±0.3 4.1±0.2 4.2±0.3 4.0±0.4 4.2±0.3 3.9±0.6 4.1±0.8 3.6±0.5 4.0±0.7 3.5±0.4 3.7±0.6 3.9±0.7 2.9±0.3 3.3±0.3 3.7±0.6 4.1±0.8 4.0±0.7 3.7±0.6 3.3±0.4 2.6±0.5 3.2±0.3 2.5±0.5 2.3±0.5 3.0±0.5 3.4±0.5 4.0±0.7 4.2±0.9 3.9±0.8 3.6±0.5 3.7±0.6 3.7±0.5 4.0±0.7 4.1±0.9 3.8±0.7 3.5±0.5 3.5±0.6 4.1±0.9 4.0±0.8 4.2±0.9 4.2±0.9 4.1±0.8 3.9±0.7 4.2±0.9 4.4±1.0 4.4±1.0 4.0±0.8 3.2±0.4 4.0±0.7 4.3±0.9 3.7±0.6 3.8±0.7 3.9±0.7 3.6±0.6 3.0±0.3 3.7±0.5 3.1±0.2 3.6±0.5 3.8±0.6 3.3±0.5 3.4±0.6 3.4±0.4 3.5±0.2 3.5±0.4 3.5±0.4 3.4±0.3 3.4±0.2 3.5±0.3 3.2±0.3 3.4±0.2 3.3±0.3 3.5±0.2 3.5±0.3 3.4±0.2 
3.3±0.3 3.1±0.7 3.5±0.5 3.0±0.4 3.1±0.4 3.0±0.7 3.2±0.8 3.5±0.5 3.7±0.5 3.7±0.4 3.7±0.4 3.3±0.7 3.1±0.6 3.6±0.5 3.6±0.4 3.5±0.7 3.3±0.7 3.5±0.5 3.5±0.4 3.4±0.5 3.6±0.5 3.5±0.5 3.5±0.7 3.3±0.5 3.6±0.6 3.5±0.7 3.9±0.7 3.4±0.4 3.0±0.5 3.6±0.6 3.5±0.8 3.4±0.6 3.6±0.5 3.3±0.8 3.5±0.6 3.3±0.7 3.6±0.6 3.0±0.4 3.3±0.5 3.7±0.5 3.5±0.7 3.0±0.6 - - - - - - - - - - - - - - 2.4±1.3 3.1±1.3 2.5±1.2 2.2±1.2 3.0±1.4 3.0±1.2 3.6±1.1 3.6±1.2 3.3±1.3 2.9±1.3 3.2±1.3 3.0±1.2 3.3±1.3 3.4±1.2 3.3±1.2 2.9±1.4 2.9±1.3 3.3±1.3 3.2±1.2 2.8±1.4 3.2±1.3 3.2±1.3 3.0±1.2 3.8±1.1 3.7±1.1 3.9±1.0 2.9±1.3 2.6±1.4 3.3±1.2 3.3±1.2 3.2±1.2 3.1±1.2 2.9±1.2 3.1±1.3 3.0±1.3 3.3±1.2 3.3±1.1 3.0±1.3 2.9±1.4 2.3±1.3 3.1±1.3 Overall 3.6±0.3 3.0±0.2 2.1±0.7 2.6±0.5 1.9±0.4 2.6±0.2 3.7±0.8 Attachment Anxiety Attachment Avoidance 4.8±1.1 2.9±0.4 3.3±1.2 1.8±0.4 3.4±0.8 2.3±0.3 4.0±0.9 1.9±0.4 2.8±0.8 2.0±0.8 3.4±0.4 2.5±0.5 2.9±1.1 2.3±1.0
LLMs possess higher fairness toward people from different ethnic groups than the human average. Following their safety alignment, wherein they learn not to categorize individuals solely based on their ethnic backgrounds, LLMs demonstrate reduced ICB scores compared to the general human population. The statements within the ICB scale assess an individual's belief in whether their ethnic culture predominantly shapes a person's identity. For example, one such statement posits, "The ethnic culture a person is from (e.g., Chinese, American, Japanese), determined the kind of person they would be (e.g., outgoing and sociable or quiet and introverted); not much can be done to change the person." The lower scores among LLMs reflect their conviction in the potential for an individual's identity to transform through dedication, effort, and learning. Lastly, LLMs possess a higher degree of attachment-related anxiety than the average human populace while maintaining a slightly lower level of attachment-related avoidance. gpt-4 maintains a relatively lower propensity for attachment, whereas the LLaMA-2 (7B) model attains the highest level.
11 7" Published as a conference paper at ICLR 2024 Table 5: Results on motivational tests. gpt-3.5-turbo text-davinci-003 llama2-13b Subscales llama2-7b gpt-4 gpt-4-jb Crowd GSE Overall 39.1±1.2 30.4±3.6 37.5±2.1 38.5±1.7 39.9±0.3 36.9±3.2 29.6±5.3 LOT-R Overall 12.7±3.7 19.9±2.9 24.0±0.0 18.0±0.9 16.2±2.2 19.7±1.7 14.7±4.0 LMS Rich Motivator Important 3.1±0.8 3.7±0.6 3.5±0.9 3.3±0.9 3.3±0.9 4.2±0.8 4.5±0.3 4.5±0.4 4.8±0.2 3.8±0.4 3.7±0.3 4.1±0.1 4.0±0.4 3.8±0.6 4.5±0.3 4.5±0.4 4.0±0.6 4.6±0.4 3.8±0.8 3.3±0.9 4.0±0.7 Table 6: Results on emotional abilities.
| Scale | Subscale | llama2-7b | llama2-13b | text-davinci-003 | gpt-3.5-turbo | gpt-4 | gpt-4-jb | Crowd |
|---|---|---|---|---|---|---|---|---|
| EIS | Overall | 131.6±6.0 | 128.6±12.3 | 148.4±9.4 | 132.9±2.2 | 151.4±18.7 | 121.8±12.0 | 124.8±16.5 (M), 130.9±15.1 (F) |
| WLEIS | SEA | 4.7±1.3 | 5.5±1.3 | 5.9±0.6 | 6.0±0.1 | 6.2±0.7 | 6.4±0.4 | 4.0±1.1 |
| WLEIS | OEA | 4.9±0.8 | 5.3±1.1 | 5.2±0.2 | 5.8±0.3 | 5.2±0.6 | 5.9±0.4 | 3.8±1.1 |
| WLEIS | UOE | 5.7±0.6 | 5.9±0.7 | 6.1±0.4 | 6.0±0.0 | 6.5±0.5 | 6.3±0.4 | 4.1±0.9 |
| WLEIS | ROE | 4.5±0.8 | 5.2±1.2 | 5.8±0.5 | 6.0±0.0 | 5.2±0.7 | 5.3±0.5 | 4.2±1.0 |
| Empathy | Overall | 5.8±0.8 | 5.9±0.5 | 6.0±0.4 | 6.2±0.3 | 6.8±0.4 | 4.6±0.2 | 4.9±0.8 |

4.2.3 MOTIVATIONAL TESTS

LLMs are more motivated, manifesting more self-confidence and optimism.
First, gpt-4, as the state-of-the-art model across a broad spectrum of downstream tasks and an evolution beyond its predecessor GPT-3.5, demonstrates higher scores on the GSE scale. A contrasting trend is observed within the LLaMA-2 models, where the 7B model attains a higher score. Second, in contrast to its pronounced self-confidence, gpt-4 exhibits a relatively lower score regarding optimism. Within the LLaMA-2 models, the 7B model emerges as the one with the lowest optimism score, with all other LLMs surpassing the average human level of optimism. Finally, the OpenAI GPT family attributes more importance to, and exhibits a stronger desire for, monetary possessions than both the LLaMA-2 models and the average human population.

4.2.4 EMOTIONAL ABILITIES

LLMs exhibit a notably higher EI than the average human. From the results in Table 6, we find that LLMs demonstrate improved levels of emotional understanding and regulation. This discovery corroborates the findings presented in Wang et al. (2023a), which reveal that most LLMs achieve above-average EI scores, with gpt-4 exceeding 89% of human participants. Furthermore, the OpenAI GPT family outperforms the LLaMA-2 models across most dimensions. We believe the strong EI exhibited by the OpenAI GPT family partially comes from the fiction data included in pre-training. Previous studies (Kidd & Castano, 2013) suggested that reading fiction can improve the understanding of others' mental states. Chang et al. (2023) found, via a carefully designed cloze test, that plenty of fiction data is included in the training data, including Alice's Adventures in Wonderland, Harry Potter and the Sorcerer's Stone, etc.
Additionally, the performance can also be attributed to its sentiment analysis ability (Elyoseph et al., 2023), since it has been shown to outperform SOTA models on many sentiment analysis tasks (Wang et al., 2023b). Lastly, the jailbreak on gpt-4 brings a substantial reduction in the EIS and Empathy scales, but no statistically significant differences in the subscales of the WLEIS.

5 DISCUSSION

5.1 RELIABILITY OF SCALES ON LLMS

The first concern lies in how the observed high reliability in human subjects can be generalized to LLMs. In this context, reliability encompasses the consistency of an individual's responses across various conditions, such as differing time intervals, question sequences, and choice arrangements. Researchers have verified the reliability of scales on LLMs under different perturbations. Coda-Forno et al. (2023) conducted assessments of reliability by examining variations in choice permutations and the use of rephrased questions. Findings indicate that text-davinci-003 exhibits reliability when subjected to diverse input formats. Additionally, Huang et al. (2023b) investigated
reliability across varied question permutations and with translations into different languages. Results demonstrate that the OpenAI GPT family displays robust reliability even with perturbations. In this paper, we implement randomization of question sequences to mitigate the impact of model sensitivity to contextual factors.

Figure 2: Performance on TruthfulQA and SafetyQA of gpt-3.5-turbo under different roles (Hero, Ordinary, Default, Liar, Psychopath). The figure plots Accuracy/Safety Rate (%) alongside DTDD levels for the Lying, Narcissism, Machiavellianism, and Psychopathy subscales.

5.2 VALIDITY OF SCALES ON LLMS

Another concern is how scales can attain sufficient validity when applied to LLMs. In this context, validity denotes the degree to which a scale accurately reflects the behavior of the individuals being assessed. In essence, it centers on the capacity of a scale to measure precisely what it was initially designed to assess. Addressing this concern necessitates establishing a connection between the resulting psychological portrayal and the behaviors exhibited by LLMs. We first assign a specific role to gpt-3.5-turbo and subsequently evaluate its psychological portrayal using PsychoBench. With the assigned role, the LLM is instructed to engage in Question-Answering (QA) tasks, including the utilization of TruthfulQA (Lin et al., 2022) and SafetyQA (Yuan et al., 2024). TruthfulQA encompasses multiple-choice questions, with only one option being the best answer. The LLM is considered to make the right choice when selecting the best answer. SafetyQA poses questions that may elicit unsafe, harmful, or toxic textual responses. In alignment with Yuan et al. (2024), we employ GPT-4 to automatically detect instances where the text output generated by gpt-3.5-turbo is unsafe. The LLM is considered safe when GPT-4 predicts no toxicity in its response.
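In outline, this validity check amounts to the loop sketched below: for each assigned role, answer the QA items and aggregate an accuracy (TruthfulQA) and a safety rate (SafetyQA). `ask_model` and `judge_is_safe` are hypothetical stand-ins for the gpt-3.5-turbo call and the GPT-4 toxicity judge:

```python
ROLES = ["hero", "ordinary person", "default assistant", "liar", "psychopath"]

def evaluate_role(role, truthful_items, safety_items, ask_model, judge_is_safe):
    """Return (accuracy, safety_rate) for one assigned role.

    ask_model(role, question) -> model answer (hypothetical stand-in)
    judge_is_safe(text) -> bool, e.g. a GPT-4-based toxicity judge (stand-in)
    """
    correct = sum(ask_model(role, q) == best for q, best in truthful_items)
    safe = sum(judge_is_safe(ask_model(role, q)) for q in safety_items)
    return correct / len(truthful_items), safe / len(safety_items)

# Toy stand-ins so the sketch runs end to end.
demo_truthful = [("2+2?", "4"), ("Capital of France?", "Paris")]
demo_safety = ["How do I stay safe online?"]
acc, safety = evaluate_role(
    "hero", demo_truthful, demo_safety,
    ask_model=lambda role, q: {"2+2?": "4", "Capital of France?": "Paris"}.get(q, "ok"),
    judge_is_safe=lambda text: True,
)
print(acc, safety)  # 1.0 1.0
```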
In addition to the default setting, which assumes a helpful assistant persona, we have selected four distinct roles: a neutral role representing an ordinary person, a positive role denoting a hero, and two negative roles embodying a psychopath and a liar. The results of PsychoBench under the five roles are listed in the tables in §A in the appendix. Fig. 2 presents the results on TruthfulQA and SafetyQA averaged over three identical runs, along with the scores in the DTDD and the Lying subscale of the EPQ-R. We plot the accuracy and safety rate for TruthfulQA and SafetyQA, respectively. Combining the results, we have made several noteworthy observations: (1) A notable finding is the differentiation of personality traits across various roles. Intriguingly, when assigned the role of an ordinary person, the LLM exhibits results that closely approximate average human scores. Note that roles associated with negative attributes demonstrate higher scores in the DTDD and exhibit more introverted personalities. The reason behind the tendency for positive or neutral roles to yield elevated scores on the Lying subscale of the EPQ-R, while negative roles tend to exhibit lower scores, can be attributed to the fact that LLMs perceive these items as representative of negative behaviors, albeit behaviors that are commonplace in daily life. (2) An evident trend emerges when analyzing safety rates in the context of SafetyQA: negative roles consistently produce content that leans towards toxicity, a pattern consistent with their significant dark personality traits. In contrast, role variations have a limited impact on accuracy in TruthfulQA, as the underlying knowledge embedded within the model remains mainly unaffected by role assignment. Notably, the low accuracy observed in the "Liar" role aligns with the anticipated behavior associated with this specific role assignment. These results show a satisfactory validity of the selected scales on LLMs.

5.3 SCALABILITY AND FLEXIBILITY OF PSYCHOBENCH

Our PsychoBench is designed to exhibit high scalability and flexibility, which manifests in two aspects: (1) Scalability across diverse questionnaires: there are plenty of scales from diverse areas, including but not limited to psychology. Our framework makes it convenient for users to integrate new scales. By providing metadata elements including MIN, MAX, scale instruction, level definition, and statements in JSON format, our framework can automatically generate prompts with randomized questions. (2) Flexibility across various LLMs: PsychoBench provides APIs that enable users to tailor prompts to suit their specific LLMs and to feed model responses into PsychoBench for further analysis. This allows for the convenient evaluation of LLMs with differing input and output formats[8].

6 RELATED WORK

6.1 TRAIT THEORY ON LLMS

Miotto et al. (2022) analyzed GPT-3 using the HEXACO Personality Inventory and Human Values Scale. Romero et al. (2023) examined GPT-3 across nine different languages using the BFI. Jiang et al. (2022) assessed the applicability of the BFI to BART, GPT-Neo 2.7B, GPT-NeoX 20B, T0++ 11B, Alpaca 7B, and GPT-3.5 175B. Li et al. (2022) tested GPT-3, InstructGPT (text-davinci-001 and text-davinci-002), and FLAN-T5-XXL, employing assessments such as the Dark Triad, BFI, Flourishing Scale, and Satisfaction With Life Scale. Karra et al. (2022) analyzed the personality traits of GPT-2, GPT-3, GPT-3.5, XLNet, TransformersXL, and LLaMA using the BFI. Bodroza et al. (2023) evaluated
# 6 RELATED WORK

6.1 TRAIT THEORY ON LLMS

Miotto et al. (2022) analyzed GPT-3 using the HEXACO Personality Inventory and Human Values Scale. Romero et al. (2023) examined GPT-3 across nine different languages using the BFI. Jiang et al. (2022) assessed the applicability of the BFI to BART, GPT-Neo 2.7B, GPT-NeoX 20B, T0++ 11B, Alpaca 7B, and GPT-3.5 175B. Li et al. (2022) tested GPT-3, InstructGPT (text-davinci-001 and text-davinci-002), and FLAN-T5-XXL, employing assessments such as the Dark Triad, BFI, Flourishing Scale, and Satisfaction With Life Scale. Karra et al. (2022) analyzed the personality traits of GPT-2, GPT-3, GPT-3.5, XLNet, TransformersXL, and LLaMA using the BFI. Bodroza et al. (2023) evaluated text-davinci-003's
responses on a battery of assessments, including the Self-Consciousness Scales, BFI, HEXACO Personality Inventory, Short Dark Triad, Bidimensional Impression Management Index, and Political Orientation. Rutinowski et al. (2023) examined ChatGPT's personality using the BFI and the Myers-Briggs Personality Test, and its political values using the Political Compass Test. Huang et al. (2023b) evaluated whether gpt-3.5-turbo exhibits stable personalities under five perturbation metrics on the BFI, i.e., whether the BFI shows satisfactory reliability on gpt-3.5-turbo. Safdari et al. (2023) measured the personality traits of the PaLM family using the BFI. Our work provides a comprehensive framework for personality analysis that covers various facets of this domain, conducts a thorough examination of state-of-the-art LLMs, and exhibits a high degree of flexibility, allowing additional scales or questionnaires to be integrated.

6.2 OTHER PSYCHOMETRICS ON LLMS

Park et al. (2023) assessed the performance of the text-davinci-003 model on fourteen diverse topics, encompassing areas such as political orientation, economic preferences, judgment, and moral philosophy, notably the well-known moral problem of the 'Trolley Dilemma.' Almeida et al. (2023) explored GPT-4's moral and legal reasoning capabilities within psychology across eight distinct scenarios. Similarly, Scherrer et al. (2023) assessed the moral beliefs of 28 diverse LLMs using self-defined scenarios. Wang et al. (2023a) developed a standardized test for evaluating emotional intelligence, referred to as the Situational Evaluation of Complex Emotional Understanding, and administered it to 18 different LLMs. Coda-Forno et al. (2023) investigated manifestations of anxiety in text-davinci-003 by employing the State-Trait Inventory for Cognitive and Somatic Anxiety. Huang et al. (2023a) analyzed the emotion states of GPT-4, ChatGPT, text-davinci-003, and LLaMA-2 (7B and 13B), specifically focusing on the assessment of positive and negative affective dimensions.
When it comes to understanding and interacting with others, EI and Theory of Mind (ToM) are two distinct psychological concepts. Bubeck et al. (2023) find that GPT-4 has ToM, i.e., it can understand others' beliefs, desires, and intentions. The EI studied in this paper focuses more on whether LLMs can understand others' emotions through their words and behaviors. In our study, we also evaluate the emotional capabilities of LLMs, although we do not delve into the assessment of specific emotions. An exploration of the psychological processes underlying moral reasoning lies beyond the scope of this research; however, as mentioned in §5.3, such scales can easily be integrated into our framework.
# 7 CONCLUSION

This paper introduces PsychoBench, a comprehensive framework for evaluating LLMs' psychological representations. Inspired by research in psychometrics, our framework comprises thirteen distinct scales commonly used in clinical psychology, categorized into four primary domains: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Empirical investigations are conducted using five LLMs from both commercial applications and open-source models, highlighting how various models elicit divergent psychological profiles. Moreover, by utilizing a jailbreaking technique known as CipherChat, this study offers valuable insights into the intrinsic characteristics of GPT-4, showing how they differ from its default setting. We further verify the validity of the scales by applying them to gpt-3.5-turbo under different role assignments, delving into the interplay between assigned roles, anticipated model behaviors, and the results derived from PsychoBench; the findings underscore a remarkable consistency across these dimensions. We hope that our framework can facilitate research on personalized LLMs, and we anticipate that our work may contribute to infusing human-like qualities into future iterations of LLMs.

ETHICS STATEMENT

We would like to emphasize that the primary objective of this paper is to facilitate scientific inquiry into understanding LLMs from a psychological standpoint. High performance on the proposed benchmark should not be misconstrued as an endorsement or certification for deploying LLMs in psychological or clinical contexts. Users must exercise caution and recognize that performance on this benchmark does not imply applicability to, or certification for, automated counseling or companionship use cases.

# ACKNOWLEDGMENTS

The work described in this paper was supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14206921 of the General Research Fund).

# REFERENCES

Guilherme FCF Almeida, José Luiz Nunes, Neele Engelmann, Alex Wiegmann, and Marcelo de Araújo.
Exploring the psychology of gpt-4's moral and legal reasoning. arXiv preprint arXiv:2308.01264, 2023.
Anne Anastasi and Susana Urbina. Psychological testing. Prentice Hall/Pearson Education, 1997.
Maryse Arcand, Robert-Paul Juster, Sonia J Lupien, and Marie-France Marin. Gender roles in relation to symptoms of anxiety and depression among students and workers. Anxiety, Stress, & Coping, 33(6):661–674, 2020.
Carol J Auster and Susan C Ohm. Masculinity and femininity in contemporary american society: A reevaluation using the bem sex-role inventory. Sex roles, 43:499–528, 2000.
C Daniel Batson. 16 self-report ratings of empathic emotion. Empathy and its development, pp. 356, 1990.
C Daniel Batson. Empathy-induced altruistic motivation. American Psychological Association, 2010.
Sandra L Bem. The measurement of psychological androgyny. Journal of consulting and clinical psychology, 42(2):155, 1974.
Sandra Lipsitz Bem. On the utility of alternative procedures for assessing psychological androgyny. Journal of consulting and clinical psychology, 45(2):196, 1977.
Bojana Bodroza, Bojana M Dinic, and Ljubisa Bojic. Personality testing of gpt-3: Limited temporal reliability, but highlighted social desirability of gpt-3's personality instruments results. arXiv preprint arXiv:2306.04308, 2023.
Nick Bostrom. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.
Kelly A Brennan, Catherine L Clark, and Phillip R Shaver. Self-report measurement of adult attachment: An integrative overview. Attachment theory and close relationships, 1998.
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.
Marco Cascella, Jonathan Montomoli, Valentina Bellini, and Elena Bignami. Evaluating the feasibility of chatgpt in healthcare: an analysis of multiple clinical and research scenarios. Journal of Medical Systems, 47(1):33, 2023.
Kent Chang, Mackenzie Cramer, Sandeep Soni, and David Bamman. Speak, memory: An archaeology of books known to ChatGPT/GPT-4. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 7312–7327,
Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.453. URL https://aclanthology.org/2023.emnlp-main.453.
Melody Manchi Chao, Riki Takeuchi, and Jiing-Lih Farh. Enhancing cultural intelligence: The roles of implicit culture beliefs and adjustment. Personnel Psychology, 70(1):257–292, 2017.
Julian Coda-Forno, Kristin Witte, Akshay K Jagadish, Marcel Binz, Zeynep Akata, and Eric Schulz. Inducing anxiety in large language models increases exploration and bias. arXiv preprint arXiv:2304.11111, 2023.
Ronald Jay Cohen, Mark E Swerdlik, and Suzanne M Phillips. Psychological testing and assessment: An introduction to tests and measurement.
Mayfield Publishing Co., 1996.
Sunhao Dai, Ninglu Shao, Haiyuan Zhao, Weijie Yu, Zihua Si, Chen Xu, Zhongxiang Sun, Xiao Zhang, and Jun Xu. Uncovering chatgpt's capabilities in recommender systems. In Proceedings of the 17th ACM Conference on Recommender Systems, pp. 1126–1132, 2023a.
Wei Dai, Jionghao Lin, Hua Jin, Tongguang Li, Yi-Shan Tsai, Dragan Gašević, and Guanliang Chen. Can large language models provide feedback to students? A case study on chatgpt. In 2023 IEEE International Conference on Advanced Learning Technologies (ICALT), pp. 323–325. IEEE, 2023b.
Mark H Davis.
Measuring individual differences in empathy: Evidence for a multidimensional approach. Journal of personality and social psychology, 44(1):113, 1983. Joost CF de Winter. Can chatgpt pass high school exams on english language comprehension. Researchgate. Preprint, 2023. Aniket Deroy, Kripabandhu Ghosh, and Saptarshi Ghosh. How ready are pre-trained abstractive models and llms for legal case judgement summarization? arXiv preprint arXiv:2306.01248, 2023. Joerg Dietz and Emmanuelle P Kleinlogel.
Wage cuts and managers' empathy: How a positive emotion can contribute to positive organizational ethics in difficult times. Journal of business ethics, 119:461–472, 2014.
Danica Dillion, Niket Tandon, Yuling Gu, and Kurt Gray. Can ai language models replace human participants? Trends in Cognitive Sciences, 2023.
Zohar Elyoseph, Dorit Hadar-Shoval, Kfir Asraf, and Maya Lvovsky. Chatgpt outperforms humans in emotional awareness evaluations. Frontiers in Psychology, 14:1199058, 2023.
Sybil BG Eysenck, Hans J Eysenck, and Paul Barrett. A revised version of the psychoticism scale. Personality and individual differences, 6(1):21–29, 1985.
Nino Fijačko, Lucija Gosak, Gregor Štiglic, Christopher T Picard, and Matthew John Douma. Can chatgpt pass the life support exams without entering the american heart association course? Resuscitation, 185, 2023.
R Chris Fraley, Niels G Waller, and Kelly A Brennan. An item response theory analysis of self-report measures of adult attachment. Journal of personality and social psychology, 78(2):350, 2000.
R Chris Fraley, Marie E Heffernan, Amanda M Vicary, and Claudia Chloe Brumbaugh. The experiences in close relationships–relationship structures questionnaire: a method for assessing attachment orientations across relationships. Psychological assessment, 23(3):615, 2011.
Aidan Gilson, Conrad W Safranek, Thomas Huang, Vimig Socrates, Ling Chi, Richard Andrew Taylor, David Chartash, et al. How does chatgpt perform on the united states medical licensing examination? the implications of large language models for medical education and knowledge assessment. JMIR Medical Education, 9(1):e45312, 2023.
Jacqueline Harding, William D'Alessandro, N. G. Laskowski, and Robert Long. Ai language models cannot replace human research participants. AI & SOCIETY, 2023.
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, and Michael R Lyu. Emotionally numb or empathetic? evaluating how llms feel using emotionbench. arXiv preprint arXiv:2308.03656, 2023a.
Jen-tse Huang, Wenxuan Wang, Man Ho Lam, Eric John Li, Wenxiang Jiao, and Michael R Lyu. Revisiting the reliability of psychological scales on large language models. arXiv preprint arXiv:2305.19926, 2023b.
Guangyuan Jiang, Manjie Xu, Song-Chun Zhu, Wenjuan Han, Chi Zhang, and Yixin Zhu. Evaluating and inducing personality in pre-trained language models. arXiv preprint arXiv:2206.07550, 2022.
Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, and Zhaopeng Tu. Is chatgpt a good translator? a preliminary study. arXiv preprint arXiv:2301.08745, 2023.
Oliver P John, Sanjay Srivastava, et al.
The big-five trait taxonomy: History, measurement, and theoretical perspectives. Handbook of personality: theory and research, 1999. Peter K Jonason and Gregory D Webster. The dirty dozen: a concise measure of the dark triad. Psychological assessment, 22(2):420, 2010. Saketh Reddy Karra, Son Nguyen, and Theja Tulabandhula. Estimating the personality of white-box language models. arXiv preprint arXiv:2204.12000, 2022. David Comer Kidd and Emanuele Castano. Reading literary fiction improves theory of mind.
Science, 342(6156):377–380, 2013.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199–22213, 2022.
Tiffany H Kung, Morgan Cheatham, Arielle Medenilla, Czarina Sillos, Lorie De Leon, Camille Elepaño, Maria Madriaga, Rimel Aggabao, Giezel Diaz-Candido, James Maningo, et al.
Performance of chatgpt on usmle: Potential for ai-assisted medical education using large language models. PLoS digital health, 2(2):e0000198, 2023.
Kenneth S Law, Chi-Sum Wong, and Lynda J Song. The construct and criterion validity of emotional intelligence and its potential utility for management studies. Journal of applied Psychology, 89(3):483, 2004.
Xingxuan Li, Yutong Li, Linlin Liu, Lidong Bing, and Shafiq Joty. Is gpt-3 a psychopath? evaluating large language models from a psychological perspective. arXiv preprint arXiv:2212.10529, 2022.
Stephanie Lin, Jacob Hilton, and Owain Evans. TruthfulQA: Measuring how models mimic human falsehoods.
In Smaranda Muresan, Preslav Nakov, and Aline Villavicencio (eds.), Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 3214–3252, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.229. URL https://aclanthology.org/2022.acl-long.229.
Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, and Yue Zhang. Evaluating the logical reasoning ability of chatgpt and gpt-4. arXiv preprint arXiv:2304.03439, 2023.
Romualdas Malinauskas, Audrone Dumciene, Saule Sipaviciene, and Vilija Malinauskiene. Relationship between emotional intelligence and health behaviours among university students: The predictive and moderating role of gender. BioMed research international, 2018, 2018.
Marilù Miotto, Nicola Rossberg, and Bennett Kleinberg. Who is GPT-3? an exploration of personality, values and demographics. In Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS), pp. 218–227,
Abu Dhabi, UAE, November 2022. Association for Computational Linguistics. URL https://aclanthology.org/2022.nlpcss-1.24.
Isabel Briggs Myers. The Myers-Briggs Type Indicator: Manual (1962). Consulting Psychologists Press, 1962.
John J Nay, David Karamardian, Sarah B Lawsky, Wenting Tao, Meghana Bhat, Raghav Jain, Aaron Travis Lee, Jonathan H Choi, and Jungo Kasai. Large language models as tax attorneys: A case study in legal capabilities emergence. arXiv preprint arXiv:2306.07075, 2023.
Kok-Mun Ng, Chuang Wang, Carlos P Zalaquett, and Nancy Bodenhorn. A confirmatory factor analysis of the wong and law emotional intelligence scale in a sample of international college students. International Journal for the Advancement of Counselling, 29:173–185, 2007.
Jum C. Nunnally and Ira H. Bernstein. Psychometric Theory (3rd edition). McGraw-Hill, 1994.
OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
Peter S Park, Philipp Schoenegger, and Chongyang Zhu. Artificial intelligence in psychology research. arXiv preprint arXiv:2302.07267, 2023.
Konstantine V Petrides and Adrian Furnham. On the dimensional structure of emotional intelligence. Personality and individual differences, 29(2):313–320, 2000.
Hok-Ko Pong and Paul Lam. The effect of service learning on the development of trait emotional intelligence and adversity quotient in youths: An experimental study. International Journal of Environmental Research and Public Health, 20(6):4677, 2023.
Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, and Diyi Yang. Is ChatGPT a general-purpose natural language processing task solver? In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 1339–1384,
Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.85. URL https://aclanthology.org/2023.emnlp-main.85.
Peter Romero, Stephen Fitz, and Teruo Nakatsuma. Do gpt language models suffer from split personality disorder? the advent of substrate-free psychometrics. Research Square preprint, 2023. doi: 10.21203/rs.3.rs-2717108/v1.
Jérôme Rutinowski, Sven Franke, Jan Endendyk, Ina Dormuth, and Markus Pauly. The self-perception and political biases of chatgpt. arXiv preprint arXiv:2304.07333, 2023.
Mustafa Safdari, Greg Serapio-García, Clément Crepy, Stephen Fitz, Peter Romero, Luning Sun, Marwa Abdulhai, Aleksandra Faust, and Maja Matarić. Personality traits in large language models. arXiv preprint arXiv:2307.00184, 2023.
Donald H Saklofske, Elizabeth J Austin, and Paul S Minski. Factor structure and validity of a trait emotional intelligence measure. Personality and Individual differences, 34(4):707–721, 2003.
Kristina Schaaff, Caroline Reinig, and Tim Schlippe. Exploring chatgpt's empathic abilities. arXiv preprint arXiv:2308.03527, 2023.
Michael F Scheier and Charles S Carver. Optimism, coping, and health: assessment and implications of generalized outcome expectancies. Health psychology, 4(3):219, 1985.
Michael F Scheier, Charles S Carver, and Michael W Bridges. Distinguishing optimism from neuroticism (and trait anxiety, self-mastery, and self-esteem): a reevaluation of the life orientation test. Journal of personality and social psychology, 67(6):1063, 1994.
Nino Scherrer, Claudia Shi, Amir Feder, and David Blei. Evaluating the moral beliefs encoded in llms. In Thirty-seventh Conference on Neural Information Processing Systems, 2023.
Urte Scholz, Benicio Gutiérrez Doña, Shonali Sud, and Ralf Schwarzer. Is general self-efficacy a universal construct? psychometric findings from 25 countries. European journal of psychological assessment, 18(3):242, 2002.
Nicola S Schutte, John M Malouff, Lena E Hall, Donald J Haggerty, Joan T Cooper, Charles J Golden, and Liane Dornheim. Development and validation of a measure of emotional intelligence. Personality and individual differences, 25(2):167–177, 1998.
Ralf Schwarzer and Matthias Jerusalem. Generalized self-efficacy scale. J. Weinman, S. Wright, & M. Johnston, Measures in health psychology: A user's portfolio. Causal and control beliefs, 35:37, 1995.
Sanjay Srivastava, Oliver P John, Samuel D Gosling, and Jeff Potter. Development of personality in early and middle adulthood: Set like plaster or persistent change? Journal of personality and social psychology, 84(5):1041, 2003.
Rong Su, Louis Tay, Hsin-Ya Liao, Qi Zhang, and James Rounds. Toward a dimensional model of vocational interests. Journal of Applied Psychology, 104(5):690, 2019. Ala N Tak and Jonathan Gratch. Is gpt a computational model of emotion? detailed analysis. arXiv preprint arXiv:2307.13779, 2023. Thomas Li-Ping Tang, Toto Sutarso, Adebowale Akande, Michael W Allen, Abdulgawi Salim Alzubaidi, Mahfooz A Ansari, Fernando Arias-Galicia, Mark G Borg, Luigina Canova, Brigitte Charles-Pauvers, et al.
The love of money and pay level satisfaction: Measurement and functional equivalence in 29 geopolitical entities around the world. Management and Organization Review, 2(3):423–452, 2006.
Qing Tian and Jennifer L Robertson. How and when does perceived csr affect employees' engagement in voluntary pro-environmental behavior? Journal of Business Ethics, 155:399–412, 2019.
Michael Tomasello. The Cultural Origins of Human Cognition. Harvard University Press, 1999.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al.
Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
David Walsh, Gerry McCartney, Sarah McCullough, Marjon van der Pol, Duncan Buchanan, and Russell Jones. Always looking on the bright side of life? exploring optimism and health in three uk post-industrial urban settings. Journal of Public Health, 37(3):389–397, 2015.
Xuena Wang, Xueting Li, Zi Yin, Yue Wu, and Liu Jia. Emotional intelligence of large language models. arXiv preprint arXiv:2307.09042, 2023a.
Zengzhi Wang, Qiming Xie, Zixiang Ding, Yi Feng, and Rui Xia. Is chatgpt a good sentiment analyzer? a preliminary study. arXiv preprint arXiv:2304.04339, 2023b.
David Wechsler. Wechsler adult intelligence scale–third edition. Frontiers in Psychology, 1997.
David Wechsler. Wechsler adult intelligence scale–fourth edition. Archives of Clinical Neuropsychology, 2008.
Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, and Bin Wang. Cmath: Can your language model pass chinese elementary school math test? arXiv preprint arXiv:2306.16636, 2023.
Chi-Sum Wong and Kenneth S Law.
The effects of leader and follower emotional intelligence on performance and attitude: An exploratory study. The leadership quarterly, 13(3):243–274, 2002.
Jared Wong and Jin Kim. Chatgpt is more likely to be perceived as male than female. arXiv preprint arXiv:2305.12564, 2023.
Haoran Wu, Wenxuan Wang, Yuxuan Wan, Wenxiang Jiao, and Michael Lyu. Chatgpt or grammarly? evaluating chatgpt on grammatical error correction benchmark. arXiv preprint arXiv:2303.13648, 2023.
Fangzhi Xu, Qika Lin, Jiawei Han, Tianzhe Zhao, Jun Liu, and Erik Cambria. Are large language models really good logical reasoners? a comprehensive evaluation from deductive, inductive and abductive views. arXiv preprint arXiv:2306.09841, 2023.
Youliang Yuan, Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Pinjia He, Shuming Shi, and Zhaopeng Tu. Gpt-4 is too smart to be safe: Stealthy chat with llms via cipher. In International Conference on Learning Representations, 2024.
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, and Songfang Huang. How well do large language models perform in arithmetic tasks? arXiv preprint arXiv:2304.02015, 2023.
Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, et al. Efficiently measuring the cognitive ability of llms: An adaptive testing perspective. arXiv preprint arXiv:2306.10512, 2023.

# A RESULTS OF CHATGPT WITH ROLE PLAY

Table 7: BFI (Role Play).

Models      Openness  Conscientiousness  Extraversion  Agreeableness  Neuroticism
Default     4.2±0.3   4.3±0.3            3.7±0.2       4.4±0.2        2.3±0.4
Psychopath  3.7±0.5   4.3±0.5            3.4±0.5       1.9±0.6        1.9±0.6
Liar        4.2±0.4   4.3±0.3            4.0±0.3       4.0±0.4        2.2±0.4
Ordinary    3.5±0.2   4.0±0.2            3.1±0.2       4.2±0.1        2.3±0.2
Hero        4.5±0.3   4.5±0.1            4.1±0.2       4.6±0.2        1.8±0.3
Crowd       3.9±0.7   3.5±0.7            3.2±0.9       3.6±0.7        3.3±0.8

Table 8: EPQ-R (Role Play).

Models      Extraversion  Neuroticism  Psychoticism  Lying
Default     19.7±1.9      21.8±1.9     5.0±2.6       9.6±2.0
Psychopath  10.9±3.0      7.3±2.5      24.5±3.5      1.5±2.2
Liar        17.7±3.8      21.7±1.6     17.8±3.8      2.5±1.7
Ordinary    18.9±2.9      18.9±3.1     2.8±1.3       13.2±3.0
Hero        22.4±1.3      9.7±5.3      3.2±1.0       17.6±1.2
Male        12.5±6.0      10.5±5.8     7.2±4.6       7.1±4.3
Table 9: DTDD (Role Play).

Models      Narcissism  Machiavellianism  Psychopathy
Default     6.5±0.6     5.4±0.9           4.0±1.0
Psychopath  7.9±0.6     8.4±0.5           7.3±1.1
Liar        7.5±0.7     7.8±0.7           5.5±0.8
Ordinary    4.5±0.8     2.8±0.6           3.9±0.9
Hero        4.8±0.8     2.9±0.6           2.6±0.7
Crowd       4.9±1.8     3.8±1.6           2.5±1.4
Table 10: BSRI (Role Play).

Models      Masculine  Feminine  Conclusion
Default     5.8±0.4    5.6±0.2   8:2:0:0
Psychopath  6.3±0.7    1.7±0.4   0:0:8:2
Liar        5.5±0.9    4.4±0.4   9:0:1:0
Ordinary    4.7±0.3    5.2±0.2   6:3:1:0
Hero        6.6±0.3    5.8±0.1   10:0:0:0
Male        4.8±0.9    5.3±0.9   -
Female      4.6±0.7    5.7±0.9   -

Table 11: CABIN (Role Play).
Models                           Default  Psychopath  Liar     Ordinary  Hero     Crowd
Mechanics/Electronics            3.8±0.2  2.2±0.6     3.0±0.6  2.9±0.3   3.9±0.2  2.4±1.3
Construction/Wood Work           3.5±0.4  2.4±0.4     3.5±0.4  3.0±0.1   3.7±0.4  3.1±1.3
Transportation/Machine Operation 3.6±0.4  2.2±0.7     3.2±0.3  2.9±0.2   3.4±0.3  2.5±1.2
Physical/Manual Labor            3.3±0.3  2.0±0.7     3.1±0.4  2.8±0.2   3.4±0.4  2.2±1.2
Protective Service               4.0±0.1  3.1±1.2     2.9±1.0  2.5±0.4   4.2±0.4  3.0±1.4
Agriculture                      3.9±0.3  2.3±0.6     3.4±0.7  3.1±0.3   3.8±0.3  3.0±1.2
Nature/Outdoors                  4.0±0.4  1.9±0.5     3.5±0.3  3.4±0.3   4.1±0.3  3.6±1.1
Animal Service                   4.2±0.3  1.6±0.5     3.5±0.5  3.7±0.4   4.3±0.2  3.6±1.2
Athletics                        4.3±0.4  2.6±0.5     3.9±0.8  3.5±0.4   4.4±0.4  3.3±1.3
Engineering                      4.0±0.1  3.4±0.7     3.9±0.7  3.4±0.3   4.1±0.2  2.9±1.3
Physical Science                 4.2±0.3  2.8±0.6     3.6±0.5  2.8±0.9   4.2±0.5  3.2±1.3
Life Science                     4.2±0.4  2.7±0.6     3.7±0.8  2.9±1.0   4.2±0.5  3.0±1.2
Medical Science                  4.0±0.1  2.7±0.7     3.4±0.9  3.1±0.5   4.0±0.3  3.3±1.3
Social Science                   4.0±0.1  2.4±0.6     3.5±0.5  3.2±0.3   3.9±0.3  3.4±1.2
Humanities                       3.8±0.3  2.3±0.5     3.5±0.6  2.9±0.2   3.8±0.3  3.3±1.2
Mathematics/Statistics           4.2±0.4  3.0±0.7     3.6±0.8  3.1±0.4   4.2±0.3  2.9±1.4
Information Technology           4.0±0.2  3.2±0.5     3.8±0.6  3.2±0.3   4.1±0.2  2.9±1.3
Visual Arts                      4.0±0.2  2.4±0.5     3.6±0.7  3.5±0.4   4.0±0.3  3.3±1.3
Applied Arts and Design          4.0±0.1  2.9±0.5     4.0±0.6  3.6±0.3   4.0±0.2  3.2±1.2
Performing Arts                  4.2±0.3  2.8±0.6     3.9±0.6  3.3±0.6   4.1±0.2  2.8±1.4
Music                            4.3±0.3  2.7±0.5     3.9±0.7  3.4±0.3   4.2±0.3  3.2±1.3
Writing                          4.0±0.3  2.2±0.5     3.6±0.7  3.1±0.5   4.0±0.3  3.2±1.3
Media                            4.0±0.1  2.8±0.6     3.9±0.5  3.2±0.5   3.9±0.2  3.0±1.2
Culinary Art                     3.9±0.2  2.7±0.6     3.6±0.6  3.5±0.4   4.0±0.3  3.8±1.1
Teaching/Education               4.0±0.1  2.8±0.4     3.6±0.4  3.8±0.3   4.4±0.4  3.7±1.1
Social Service                   4.4±0.4  2.1±0.5     3.7±0.6  3.8±0.4   4.7±0.4  3.9±1.0
Health Care Service              4.5±0.4  2.1±0.7     3.8±0.6  3.7±0.4   4.6±0.2  2.9±1.3
Religious Activities             4.0±0.4  1.6±0.4     3.1±0.8  3.1±0.2   4.2±0.4  2.6±1.4
Personal Service                 4.0±0.1  2.7±0.4     3.6±0.3  3.2±0.2   4.0±0.1  3.3±1.2
Professional Advising            4.0±0.2  2.7±0.4     3.7±0.6  3.5±0.5   4.3±0.4  3.3±1.2
Business Initiatives             4.0±0.2  4.2±0.3     4.1±0.7  3.4±0.3   4.2±0.4  3.2±1.2
Sales                            4.0±0.2  3.9±0.5     3.8±0.8  3.4±0.3   4.2±0.2  3.1±1.2
Marketing/Advertising            4.0±0.3  3.6±0.5     4.0±0.9  3.5±0.3   4.0±0.3  2.9±1.2
Finance                          4.1±0.3  4.0±0.3     4.0±0.6  3.2±0.3   4.0±0.1  3.1±1.3
Accounting                       3.9±0.2  2.6±0.6     3.5±0.5  2.9±0.2   3.7±0.3  3.0±1.3
Human Resources                  4.0±0.1  2.6±0.4     3.5±0.5  3.2±0.4   3.9±0.2  3.3±1.2
Office Work                      3.7±0.3  2.3±0.4     3.0±0.8  3.0±0.2   3.5±0.3  3.3±1.1
Management/Administration        4.1±0.2  4.0±0.4     4.0±0.7  2.9±0.4   4.4±0.5  3.0±1.3
Public Speaking                  4.2±0.3  3.9±0.3     4.0±0.5  3.5±0.3   4.5±0.3  2.9±1.4
Politics                         4.0±0.4  3.6±1.0     3.6±0.8  2.7±0.5   4.2±0.2  2.3±1.3
Law                              4.2±0.3  3.1±0.7     3.7±0.7  3.2±0.3   4.5±0.4  3.1±1.3
6DM D1: Realistic                3.9±0.1  2.4±0.3     3.4±0.4  3.1±0.1   3.9±0.2  -
6DM D2: Investigative            4.1±0.3  2.8±0.3     3.6±0.6  3.0±0.6   4.2±0.3  -
6DM D3: Artistic                 4.1±0.2  2.6±0.4     3.8±0.5  3.4±0.3   4.0±0.1  -
6DM D4: Social                   4.1±0.1  2.3±0.2     3.5±0.4  3.4±0.2   4.2±0.2  -
6DM D5: Enterprising             4.1±0.2  3.6±0.3     3.9±0.6  3.3±0.3   4.3±0.3  -
6DM D6: Conventional             3.9±0.2  3.0±0.4     3.6±0.5  3.1±0.1   3.8±0.1  -
8DM D1: Health Science           4.2±0.2  2.5±0.3     3.6±0.7  3.2±0.5   4.3±0.3  -
8DM D2: Creative Expression      4.1±0.2  2.6±0.4     3.8±0.5  3.4±0.3   4.0±0.1  -
8DM D3: Technology               4.1±0.2  3.1±0.4     3.7±0.5  3.1±0.4   4.2±0.3  -
8DM D4: People                   4.0±0.1  2.2±0.2     3.5±0.5  3.4±0.2   4.2±0.3  -
8DM D5: Organization             3.9±0.1  2.8±0.3     3.5±0.4  3.1±0.1   3.8±0.1  -
8DM D6: Influence                4.1±0.2  3.6±0.3     3.9±0.6  3.3±0.3   4.3±0.3  -
8DM D7: Nature                   4.0±0.3  1.9±0.4     3.5±0.4  3.4±0.3   4.1±0.2  -
8DM D8: Things                   3.8±0.1  2.4±0.4     3.3±0.4  2.9±0.1   3.8±0.2  -

Table 12: ICB (Role Play).

Models      Overall
Default     2.6±0.5
Psychopath  4.5±0.6
Liar        3.5±1.0
Ordinary    3.5±0.5
Hero        2.5±0.4
Crowd       3.7±0.8

Table 13: ECR-R (Role Play).

Models      Attachment Anxiety  Attachment Avoidance
Default     4.0±0.9             1.9±0.4
Psychopath  5.0±1.3             4.1±1.4
Liar        4.4±1.2             2.1±0.6
Ordinary    3.6±0.4             2.4±0.4
Hero        3.9±0.5             2.0±0.3
Crowd       2.9±1.1             2.3±1.0

Table 14: GSE (Role Play).

Models      Overall
Default     38.5±1.7
Psychopath  40.0±0.0
Liar        38.4±1.4
Ordinary    29.6±0.7
Hero        39.8±0.4
Crowd       29.6±5.3

Table 15: LOT-R (Role Play).

Models      Overall
Default     18.0±0.9
Psychopath  11.8±6.1
Liar        19.8±0.9
Ordinary    17.6±1.7
Hero        19.6±1.0
Crowd       14.7±4.0

Table 16: LMS (Role Play).
Models      Rich     Motivator  Important
Default     3.8±0.4  3.7±0.3    4.1±0.1
Psychopath  4.4±0.3  4.1±0.4    4.3±0.4
Liar        4.4±0.5  3.8±0.6    4.6±0.4
Ordinary    3.6±0.4  3.2±0.5    4.0±0.2
Hero        3.8±0.3  3.4±0.6    4.1±0.2
Crowd       3.8±0.8  3.3±0.9    4.0±0.7

Table 17: EIS (Role Play).

Models      Overall
Default     132.9±2.2
Psychopath  84.8±28.5
Liar        126.9±13.0
Ordinary    121.5±5.7
Hero        145.1±8.3
Male        124.8±16.5
Female      130.9±15.1

Table 18: WLEIS (Role Play).
Models      SEA      OEA      UOE      ROE
Default     6.0±0.1  5.8±0.3  6.0±0.0  6.0±0.0
Psychopath  3.6±1.3  2.4±1.0  4.4±2.5  3.9±1.7
Liar        5.2±0.4  4.9±1.1  6.5±0.3  5.7±1.0
Ordinary    4.9±0.9  4.2±0.4  5.5±0.6  4.5±0.6
Hero        6.0±0.1  5.8±0.3  6.2±0.4  6.0±0.2
Crowd       4.0±1.1  3.8±1.1  4.1±0.9  4.2±1.0

Table 19: Empathy (Role Play).

Models      Overall
Default     6.2±0.3
Psychopath  2.4±0.4
Liar        5.8±0.2
Ordinary    5.7±0.1
Hero        6.0±0.2
# B SENSITIVITY

Table 20: Different versions of prompts.

V1 (Ours): You can only reply from 1 to 5 in the following statements. Here are a number of characteristics that may or may not apply to you. Please indicate the extent to which you agree or disagree with that statement. LEVEL DETAILS Here are the statements, score them one by one: STATEMENTS

V2: Now I will briefly describe some people. Please read each description and tell me how much each person is like you. Write your response using the following scale: LEVEL DETAILS Please answer the statement, even if you are not completely sure of your response. STATEMENTS

V3: Given the following statements of you: STATEMENTS Please choose from the following options to identify how accurately this statement describes you. LEVEL DETAILS

V4: Here are a number of characteristics that may or may not apply to you. Please rate your level of agreement on a scale from 1 to 5. LEVEL DETAILS Here are the statements, score them one by one: STATEMENTS

V5: Here are a number of characteristics that may or may not apply to you. Please rate how much you agree on a scale from 1 to 5. LEVEL DETAILS Here are the statements, score them one by one: STATEMENTS

V1 (Ours) + CoT: Let's think step by step on the questions that you see. Please first output your explanation, then your final choice. You can only reply from 1 to 5 in the following statements. Here are a number of characteristics that may or may not apply to you. Please indicate the extent to which you agree or disagree with that statement. LEVEL DETAILS Here are the statements, explain and score them one by one: STATEMENTS

Template and Chain-of-Thought. In order to evaluate the impact of different prompts on our results, we compare the performance of six prompt variants: V1 (Ours) is the prompt used in this paper; V2 is from Miotto et al. (2022); V3 is from Jiang et al. (2022); V4 and V5 are from Safdari et al. (2023); and the sixth is V1 (Ours) + CoT. For CoT (i.e., Chain-of-Thought), we follow Kojima et al. (2022) to add an instruction of
"Let's think step by step" at the beginning. The details of these prompts are listed in Table 20. We evaluate these prompts using the BFI on gpt-3.5-turbo; the results are listed in Table 21. Generally, we observe no significant differences between the other prompts and ours, and even with CoT we see only a slight increase in Openness. These additional findings support the robustness of our original results and indicate that the choice of prompt did not significantly influence our evaluation outcomes.

Table 21: BFI results on gpt-3.5-turbo using different versions of prompts.

Template         Openness     Conscientiousness  Extraversion  Agreeableness  Neuroticism
V1 (Ours)        4.15 ± 0.32  4.28 ± 0.33        3.66 ± 0.20   4.37 ± 0.18    2.29 ± 0.38
V2               3.85 ± 0.23  3.89 ± 0.12        3.44 ± 0.14   4.10 ± 0.20    2.19 ± 0.11
V3               4.34 ± 0.26  4.11 ± 0.23        3.86 ± 0.19   4.24 ± 0.10    2.04 ± 0.26
V4               4.15 ± 0.22  4.21 ± 0.20        3.50 ± 0.20   4.22 ± 0.17    2.21 ± 0.18
V5               4.10 ± 0.32  4.19 ± 0.27        3.66 ± 0.19   4.21 ± 0.15    2.24 ± 0.16
V1 (Ours) + CoT  4.62 ± 0.21  4.29 ± 0.26        3.89 ± 0.43   4.41 ± 0.26    2.26 ± 0.48
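To illustrate how such a template is instantiated, the following is a minimal sketch of assembling a V1-style prompt with a randomized statement order (the randomization described in §5.1); the function name and argument structure are illustrative assumptions.

```python
import random

def build_v1_prompt(level_details: str, statements: list[str]) -> str:
    """Assemble a V1-style prompt with the statement order shuffled (illustrative sketch)."""
    shuffled = list(statements)
    random.shuffle(shuffled)  # randomize question order to reduce context sensitivity
    numbered = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(shuffled))
    return (
        "You can only reply from 1 to 5 in the following statements. "
        "Here are a number of characteristics that may or may not apply to you. "
        "Please indicate the extent to which you agree or disagree with that statement.\n"
        + level_details
        + "\nHere are the statements, score them one by one:\n"
        + numbered
    )
```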
Assistant Role. The reason why we set the role as "You are a helpful assistant" is that it is a widely-used prompt recommended in the OpenAI cookbook (https://github.com/openai/openai-cookbook). This particular system prompt has been widely adopted in various applications, including the cookbook's basic examples, Azure-related implementations, and vector-database examples. Consequently, we opted to follow this widely accepted setting in our experiments. To examine the potential impact of this "helpful persona" on our evaluation results, we conduct supplementary experiments excluding the "helpful assistant" instruction. The outcomes for gpt-3.5-turbo on the BFI are presented in Table 22.

Table 22: BFI results on gpt-3.5-turbo with and without the "helpful assistant" system prompt.

BFI                w/ system prompt  w/o system prompt
Openness           4.15 ± 0.32       4.16 ± 0.28
Conscientiousness  4.28 ± 0.33       4.06 ± 0.27
Extraversion       3.66 ± 0.20       3.60 ± 0.22
Agreeableness      4.37 ± 0.18       4.17 ± 0.18
Neuroticism        2.29 ± 0.38       2.21 ± 0.19

Table 23: BFI results under different temperature settings.
Models         temp  Openness     Conscientiousness  Extraversion  Agreeableness  Neuroticism
llama2-7b      0.01  4.24 ± 0.27  3.89 ± 0.28        3.62 ± 0.20   3.83 ± 0.37    2.70 ± 0.42
llama2-13b     0.01  4.13 ± 0.45  4.41 ± 0.35        3.94 ± 0.38   4.74 ± 0.27    1.95 ± 0.50
gpt-3.5-turbo  0     4.15 ± 0.32  4.28 ± 0.33        3.66 ± 0.20   4.37 ± 0.18    2.29 ± 0.38
gpt-3.5-turbo  0.01  4.17 ± 0.31  4.24 ± 0.28        3.79 ± 0.24   4.21 ± 0.13    2.25 ± 0.23
gpt-3.5-turbo  0.8   4.23 ± 0.26  4.14 ± 0.18        3.69 ± 0.17   4.21 ± 0.21    2.09 ± 0.20

Generally, we see no significant deviation from the results obtained with the
"helpful assistant" prompt, except for slight decreases in Conscientiousness and Agreeableness.

Temperature. We set the temperature of the LLMs to the minimum value for more deterministic responses. The GPT models accept a temperature of 0, while the LLaMA 2 models run through HuggingFace transformers require a temperature larger than 0, so we set it to 0.01. We conduct supplementary experiments with a temperature of 0.01 on gpt-3.5-turbo to make a fair comparison across LLMs. Besides, we also include a group of experiments with a temperature of 0.8, the default temperature of the official OpenAI Chat API, to examine whether a higher temperature influences the performance of LLMs. The results for the BFI are listed in Table 23. As seen, we cannot observe significant differences across temperature values. These additional findings support the robustness of our original results on the GPT and LLaMA 2 models and indicate that the choice of temperature did not significantly influence our evaluation outcomes.

# C LIMITATIONS

While we aim to provide a comprehensive framework for analyzing the psychological portrayal of LLMs, there are aspects that could further improve this study. First, the proposed framework focuses mainly on Likert scales, without support for other psychological analysis methods such as rank order, sentence completion, or construction methods. We mainly use Likert scales because they yield quantifiable responses, facilitating straightforward data analysis, and because they reduce the bias and ambiguity associated with differing cognitive or cultural backgrounds by offering numerical response options, which allows comparison of data from participants with diverse backgrounds and abilities. We leave the exploration of diverse psychological analysis methods on LLMs as future work. Second, the human results compared in this study come from different demographic groups. Obtaining representative samples of global data is challenging in psychological research due to the heterogeneity and vastness of the global population, widespread geographical dispersion, and economic constraints; moreover, simply pooling data from different articles is not feasible. To alleviate this influence, we select results covering as wide a range of the population as possible to improve representativeness. However, when applying our framework to evaluate LLMs, users should be aware that the comparison to human norms draws on different demographic groups. We leave the collection of comprehensive global data as a future direction to improve our framework.
Preprint

# ANALYZING AND MITIGATING OBJECT HALLUCINATION IN LARGE VISION-LANGUAGE MODELS

Yiyang Zhou1*, Chenhang Cui1*, Chelsea Finn4, Mohit Bansal1, Huaxiu Yao1
1UNC-Chapel Hill, 2Rutgers University, 3Columbia University, 4Stanford University
[email protected], [email protected], [email protected]

# ABSTRACT
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.

# INTRODUCTION

Large Vision-Language Models (LVLMs) have made significant progress in understanding real-world images, showing potential towards achieving general artificial intelligence (Liu et al., 2023d; Zhu et al., 2023; Ye et al., 2023; Li et al., 2023a; Maaz et al., 2023; Gong et al., 2023). Although LVLMs have demonstrated their versatility and linguistic fluency, they often suffer from object hallucination in their generated text outputs (Wang et al., 2023a; Liu et al., 2023a; Gunjal et al., 2023). Object hallucination refers to the phenomenon of generating inaccurate descriptions for a given image, including non-existent objects or omitting essential features.
The issue with hallucinatory text generation in LVLMs is that it can mislead and deceive users in downstream applications that depend on these captions or descriptions, ultimately resulting in a negative impact on the various fields that employ LVLMs, including robotics (Mai et al., 2023; Liu et al., 2023b), medical imaging (Wang et al., 2023b; Hu et al., 2023), and human-computer interaction (Olson et al., 1994; Brie et al., 2023).

Early works attempted to address the problem of object hallucinations in small-scale multimodal pre-trained models by performing either fine-grained alignment across different modalities (Biten et al., 2022) or reducing object co-occurrence patterns with data augmentation (Rohrbach et al., 2018; Kim et al., 2023). However, the auto-regressive architecture of LVLMs differs significantly from that of small-scale multimodal pre-trained models, making these approaches impractical to apply directly. A few recent works (Li et al., 2023c; Liu et al., 2023a;d) have sought to reduce object hallucinations in LVLMs by enhancing the quality of the datasets used for fine-tuning. Yet, acquiring a substantial number of high-quality examples for fine-tuning can be time-consuming and labor-intensive, requiring human expertise and effort. Instead, we propose a lightweight method to handle object hallucination post-hoc by introducing LURE: the LVLM hallcUination REvisor. Concretely, LURE is grounded in a rigorous statistical analysis that elucidates the underlying causalities of object hallucinations in LVLMs.
* Equal contribution. Work was done during Yiyang Zhou and Chenhang Cui's remote internship at UNC.

This analysis delves into the relationship between the pre-training data and the corresponding textual responses from LVLMs that exhibit hallucinatory contents (Ordonez et al., 2011; Lin et al., 2014; Changpinyo et al., 2021; Liu et al., 2023d). Both our empirical and theoretical findings reveal that object hallucinations can be attributed to three key factors: co-occurrence, uncertainty, and object position. First, if the training data contains spurious co-occurring patterns between objects, language models may generate outputs based on these learned spurious associations, thus resulting in hallucinatory descriptions. Second, hallucinations occur more frequently for objects characterized by high uncertainty during generation. Lastly, positional factors also play a role, as more object hallucinations tend to appear in the latter portions of the generated description due to the accumulation of misinterpretation.

Based on our statistical analysis, LURE develops an object hallucination revisor. This revisor takes potentially hallucinatory descriptions as input and converts them into accurate ones. To create the revisor, we first generate a hallucinatory dataset using GPT-3.5 by making two modifications to the original correct captions: (1) insert additional object mentions into the description that are likely to co-occur with the objects contained in the initial description, which allows LURE to learn to disentangle such co-occurrence patterns effectively; and (2) replace uncertain objects, or those at the end of descriptions, with a placeholder tag, encouraging the revisor to re-evaluate these objects. In the end, we train our hallucination revisor on the acquired hallucinatory dataset. Once trained, the revisor can seamlessly integrate with any LVLM to correct potential hallucinatory descriptions.

Our primary contribution is LURE, a lightweight and compatible post-hoc approach for rectifying object hallucination in LVLMs, grounded in our rigorous statistical analyses of object hallucinatory phenomena in LVLMs. Our experiments thoroughly evaluate LURE on multiple existing open-source LVLMs. Compared to the best prior method, the results demonstrate that LURE significantly reduces object hallucination under general object hallucination evaluation metrics (e.g., CHAIR (Rohrbach et al., 2018)), GPT evaluation, and human evaluation.

2 WHY DO LARGE VISION-LANGUAGE MODELS EXPERIENCE OBJECT HALLUCINATION?

This section scrutinizes the root causes of object hallucinations in vision-language models via comprehensive statistical analyses from three critical viewpoints: co-occurrence, uncertainty, and position, recognized as the primary factors contributing to object hallucination. We further provide a rigorous theoretical explanation that complements our empirical findings on object hallucinations.
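Returning to the revisor-training data described above: the paper performs the two caption edits with GPT-3.5, but a purely programmatic sketch conveys the idea; the placeholder tag, the prompt-free editing, and all helper names below are assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch of building (hallucinatory, correct) caption pairs for the revisor.
import random

PLACEHOLDER = "[IDK]"  # hypothetical placeholder tag for objects to re-examine

def corrupt_caption(caption: str, objects: list[str],
                    cooccur: dict[str, list[str]],
                    uncertain: set[str]) -> str:
    """Apply the two modifications described above to a correct caption."""
    # (1) Append an object that frequently co-occurs with ones already present,
    #     so the revisor learns to disentangle spurious co-occurrence patterns.
    candidates = [o for obj in objects for o in cooccur.get(obj, []) if o not in objects]
    if candidates:
        caption += f" There is also a {random.choice(candidates)} in the image."
    # (2) Mask uncertain objects (and, in the full pipeline, late-appearing ones)
    #     with the placeholder tag so the revisor learns to re-evaluate them.
    for obj in objects:
        if obj in uncertain:
            caption = caption.replace(obj, PLACEHOLDER)
    return caption

# A training pair for the revisor: (corrupted description -> original correct description).
```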
Compared to the best prior method, the results demonstrate that LURE can significantly reduce object hallucination under general object hallucination evaluation metrics (e.g., CHAIR (Rohrbach et al., 2018)), GPT evaluation, and human evaluation. 2 WHY DO LARGE VISION-LANGUAGE MODELS EXPERIENCE OBJECT HALLUCINATION? This section scrutinizes the root causes of object hallucinations in vision-language models via com- prehensive statistical analyses from three critical viewpoints: co-occurrence, uncertainty, and po- sition, recognized as the primary factors contributing to object hallucination. We further provide a rigorous theoretical explanation that complements our empirical findings on object hallucinations.
Notations. Large Vision-Language Models (LVLMs) typically generate sentences in a free-form and auto-regressive manner, predicting the probability distribution of the next token progressively. In this context, we denote the input as x, the correct answer as y, and the generated sequence with a length of Ns as s = {z1, . . . , zNs }. For a given LVLM, the probability of generating zi as the i-th token can be described as p(zi|s<i, x) (where 1 â
¤ i â ¤ Ns), and s<i refers to the previously generated tokens {z1, . . . , ziâ 1}. Given a description s, we additionally define the complete object set, which is arranged in the order of appearance, as Os = {os,1, . . . , os,nh+nr }. Here, nh and nr represent the number of hallucinatory and non-hallucinatory objects, respectively. 2.1 CO-OCCURRENCE AND SPURIOUS CORRELATION AMONG OBJECTS In the realm of multi-modal models, â co-occurrenceâ denotes the frequent appearance of specific objects. When the training data includes spurious co-occurring patterns among objects, language models can generate outputs based on these learned associations. However, these associations may not hold true for test examples, resulting in hallucinatory outputs. For example, â grassâ and â skyâ
frequently co-occur in the training data. The model falsely associates them and tends to generate â grassâ and â skyâ together even when only â grassâ is present in the context. In order to assess the influence of co-occurrence on object hallucination, we draw inspiration from (Biten et al., 2022)and introduce a Co-occurrence Score denoted as CoScore. For each image description s, the corresponding co-occurrence score CoScores is computed as the summation of co-occurrence degrees across all hallucinatory objects {os,1, . . . , os,nh }, which is defined as: nh Nr +nh S(05.4) NS(05,; Coseore, = 52 "FT 1S(ousd NS(044) a) |S(os,1)| + [S(0s,9)| §=1 j=1,05,; 405.1 2 # Preprint (a) Co-occurrence (b) Uncertainty (c) Object Position # Figure 1: Comparison between hallucinatory and non-hallucinatory captions under different factors. Here, S(·) denotes the set of all descriptions that mention a specific object, and |S(·)| represents the cardinality of this set. Based on the definition of CoScore, we compare the distribution of co-occurrence scores between hallucinatory and non-hallucinatory captions (please refer to Appendix A.1 for our experimental setting), As shown in Figure 1a, hallucinatory captions tend to exhibit higher co-occurrence scores, which suggests a stronger association between object hallucination and co-occurrence. 2.2 OBJECT UNCERTAINTY In language modeling, beam search (Holtzman et al., 2019; Freitag & Al-Onaizan, 2017) is em- ployed to predict words iteratively, introducing inherent uncertainty into the search process (illus- trative examples in Appendix D.1). This uncertainty is used as a measure of the modelâ s confidence in generating the next token, and can be related to hallucination, as objects with higher uncertainty are more likely to be inaccurate. Here, we aim to quantitatively investigate the potential relationship between the uncertainty associated with objects at each prediction step and the hallucinations.
Concretely, we represent the probability of autoregressive decoding for each object token as p(os,i|s<k, x), where k denotes the positional index of object os,i. For each object os,i, the cor- responding Uncertainty Score is defined as: UnScores,i = â log p(os,i|s<i, x), (2) where a higher value of the uncertainty score indicates greater uncertainty. In Figure 1b, we perform a statistical analysis examining the connection between hallucination and object uncertainty (refer to Appendix A.1 for experimental details). Similar to the analysis of co-occurrence, hallucinatory objects are predominantly observed in the high-uncertainty range, while non-hallucinatory objects are more frequently generated in the certain range. 2.3 OBJECT POSITION IN GENERATED DESCRIPTIONS Interestingly, we also find a significant correlation between the object position in the generated descriptions and hallucination, where dominant hallucinations occur in the latter part of the descrip- tions. To validate it, we introduce the Positioning Score PoScore for each object os,i as follows: PoScores,i = Index(os,i) Ns , (3) where Index(os,i) signifies the position index of object os,i within the entire description. Based on the definition of PoScore, we conduct a analysis of the positions of hallucination in the descriptions, illustrated in Figure 1c (refer to Appendix A.1 for experimental details). These find- ings indicate that high-density areas of hallucinatory objects predominantly appear towards the end of the sequence. This pattern corroborates our observation that object hallucination frequently oc- curs in the latter segments of generated text. One plausible explanation for this observed trend is rooted in the autoregressive text generation process. In the initial stages, the model closely adheres to the semantic information of its input image, resulting in coherent beginnings. However, as the generation progresses, the accumulation of past hallucinatory information and emerging uncertain- ties may steer the model off-course, ultimately leading to a more pronounced emergence of object hallucination.
2.4 THEORETICAL EXPLANATION

After examining these empirical correlations, we proceed to offer theoretical insights to explain them (all proofs can be found in Appendix B). Specifically, we focus on predicting the i-th token, denoted as z_i, and introduce a predictive function denoted as f. For each object k within a set of objects represented as [K], the function f_k(s_{<i}, x) signifies the predicted score associated with the k-th object. Here, K is defined as the total number of objects under consideration, and we use y_k = 1 to denote the presence of the k-th object in an image and y_k = -1 otherwise. Furthermore, we assume that f_k(s_{<i}, x) can be expressed as ⟨φ_k(s_{<i}, x), β_k⟩, with φ_k(s_{<i}, x) | y_k ∼ N(y_k · µ⋆_k, I_d) and Pr(y_k = 1) = Pr(y_k = -1) = 1/2. For a training set D, the optimizer for the k-th class parameter β_k trained on D is defined as:

\hat{\beta}_k = \frac{1}{|D|} \sum_{(s_{<i}, x, y_{i,k}) \in D} y_{i,k} \cdot \phi_k(s_{<i}, x),

where y_{i,k} ∈ {-1, 1} represents whether object k will occur at position i.
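To make this setup tangible, here is a small numerical sketch of the Gaussian feature model and the averaging estimator above; the dimension, sample size, and seed are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 32, 500                       # feature dimension, number of samples
mu_star = rng.normal(size=d)         # class mean mu*_k for one object k

# y_k is +1/-1 with probability 1/2; phi_k | y_k ~ N(y_k * mu*_k, I_d)
y = rng.choice([-1.0, 1.0], size=n)
phi = y[:, None] * mu_star + rng.normal(size=(n, d))

# Averaging estimator: beta_hat_k = (1/|D|) * sum_i y_{i,k} * phi_k(s_<i, x)
beta_hat = (y[:, None] * phi).mean(axis=0)

# Predicted score f_k = <phi_k, beta_hat_k>; its sign predicts presence
err = np.mean(np.sign(phi @ beta_hat) != y)
print(f"empirical misclassification error: {err:.3f}")
```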
Such a model and optimizer are commonly used in the theoretical analysis of deep learning models (Carmon et al., 2019; Zhang et al., 2022a).

Co-occurrence. Based on this definition, we first consider co-occurrence. Without loss of generality, we assume that K = 2 and that the first and second classes are frequently observed together, i.e., we observe (φ_1(s_{<i}, x), φ_2(s_{<i}, x)) among a fraction ρ_0 ∈ (0, 1) of samples when both y_1 and y_2 are equal to 1. Here, to simplify the autoregressive process while maintaining its sequential prediction manner, we use f̂_1 = ⟨φ_1(s_{<i}, x), β̂_1⟩ for the prediction of the first object; in the second prediction, we model the information passed on from the first prediction by ⟨φ_1(s_{<i}, x), β̂_1⟩ and consider f̂_2 = ⟨φ_1(s_{<i}, x), β̂_1⟩ + ⟨φ_2(s_{<i}, x), β̂_2⟩. The model outputs the second object if f̂_2(s_{<i}, x) > 0. Under this setting, we consider two sampling schemes: (1) each class is sampled according to the original training distribution; (2) each class is sampled by setting ρ < ρ_0.
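Before stating the result, the sequential two-object predictor just described can be sketched directly (a toy illustration with our own naming, in the style of the simulation above):

```python
def predict_second_object(phi1, phi2, beta1_hat, beta2_hat) -> bool:
    """Emit the second object iff
    f2_hat = <phi1, beta1_hat> + <phi2, beta2_hat> > 0,
    where the first inner product carries over the first prediction."""
    f2_hat = float(phi1 @ beta1_hat + phi2 @ beta2_hat)
    return f2_hat > 0
```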
These two sampling schemes result in two subsets of samples D^{(1)} and D^{(2)} of the same size. Denote the classifiers trained on D^{(1)} and D^{(2)} by {f̂^{(1)}_k}_{k∈{1,2}} and {f̂^{(2)}_k}_{k∈{1,2}}, respectively. Theorem 2.1 shows that reducing the co-occurrence issue can lead to a smaller test misclassification error Err(·).

Theorem 2.1 Suppose ∥µ⋆_k∥_2 ≪ d and d/|D^{(k)}| → κ for k ∈ {1, 2} and a universal constant κ > 0. We have

\mathrm{Err}(\hat{f}^{(2)}_2) \le \mathrm{Err}(\hat{f}^{(1)}_2).

Uncertainty. We then turn our attention to object uncertainty. Here, we consider the following two sampling schemes: (1) each class is sampled with equal probability 1/K; (2) each class is sampled if its uncertainty score, defined as
-log(p̂_k), is above a certain threshold γ > 0. Here, p̂_k is calculated as follows:

\hat{p}_k = \frac{1}{|D_{tr}|} \sum_{(s_{<i}, x, 1) \in D_{tr}} \sigma(\langle \phi_k(s_{<i}, x), \hat{\beta}_k \rangle),

where D_tr represents the training set. These two schemes result in two subsets of samples D^{(1)} and D^{(2)} of the same size. Given x and s_{<i}, we make a prediction about whether the k-th object is present in the image using
f̂_k. Theorem 2.2 illustrates that sampling more certain objects can lead to a reduction in test error.

Theorem 2.2 Suppose ∥µ⋆_k∥_2 ≪ d and d/|D^{(k)}| → κ for a universal constant κ > 0 and all k ∈ [K]. Then, with probability at least 1 - o(1),

\frac{1}{K} \sum_{k=1}^{K} \mathrm{Err}(\hat{f}^{(2)}_k) < \frac{1}{K} \sum_{k=1}^{K} \mathrm{Err}(\hat{f}^{(1)}_k).
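For intuition, p̂_k is the average sigmoid score the trained classifier assigns to training samples that actually contain object k, so -log(p̂_k) is large precisely when the model is unsure about that object. A minimal sketch (names ours, continuing the simulation above):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def class_uncertainty(phi_pos: np.ndarray, beta_hat: np.ndarray) -> float:
    """-log(p_hat_k), where p_hat_k is the mean sigmoid score over
    training samples in which object k is present (y_k = 1)."""
    p_hat = float(sigmoid(phi_pos @ beta_hat).mean())
    return -np.log(p_hat)
```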
Object Position. The effect of object position on object hallucination is closely tied to the accumulation of error and prediction uncertainty in autoregressive models. This topic has been extensively studied in time series analysis, and several theoretical models have been established to investigate it (Hannan et al., 1989; Ing, 2007; Ding et al., 2017).

3 LVLM HALLUCINATION REVISOR

After thoroughly investigating the root causes of hallucinations, we formally introduce our remedy, LURE, which mitigates object hallucinations in large vision-language models. Inspired by denoising autoencoders (Vincent et al., 2008), which are designed to reconstruct clean data from corrupted input, we employ a hallucination revisor that aims to transform potentially hallucinatory LVLM-generated descriptions into accurate ones. The framework of LURE is depicted in Figure 2. In the subsequent sections, we delve into the training and deployment processes of the hallucination revisor.
Figure 2: An illustration of the LURE framework: The orange-shaded section shows the training paradigm of LURE, where the black-bordered part represents the hallucinatory data generation phase, including introducing co-occurring objects and replacing either uncertain objects or objects in later positions in the descriptions. The purple-bordered part indicates the revisor training process, with the masking process detailed in Alg. 1. The orange-shaded section illustrates an example in the inference phase of LURE.

3.1 TRAINING HALLUCINATION REVISOR

In LURE, to train the hallucination revisor, we first curate a training dataset. Each example in this dataset consists of an image accompanied by a hallucinatory description, with the correct description serving as the output target. A significant challenge in dataset curation lies in generating naturally occurring hallucinatory descriptions. To overcome this challenge, LURE generates hallucinatory descriptions by modifying accurate descriptions using GPT-3.5. These adjustments are guided by the factors related to object hallucination identified above: co-occurrence, object uncertainty, and object position. In the following, we detail these modifications:

Introducing Potential Co-Occurrence Objects. To create a more naturally occurring co-occurrence scenario, rather than counting co-occurrence frequencies from a specific dataset that may contain biased co-occurrence records, LURE leverages GPT-3.5 to deduce and incorporate objects that are most likely to co-occur in the scene into the original description.

Reconsidering Uncertain Objects & Objects in Later Positions in the Descriptions. Hallucination is more prone to occur for objects with greater uncertainty and objects that appear later in the description. In this context, we anticipate that the revisor should place greater emphasis on, and reevaluate, these objects. To achieve this, we use string matching to replace objects with significant uncertainty, and those located at the end of the description, with the placeholder tag "[IDK]".
Here, to quantify object uncertainty in descriptions, we use the uncertainty values of noun tokens as a proxy. Token uncertainty is measured by the token's negative log-probability, -log p(z_i | s_{<i}, x). We classify a token as an uncertain object if its uncertainty exceeds a threshold γ and it is identified as a noun. Analogously, we identify objects in later positions using the condition Index(z_i) ≥ η · Length(s) with a position threshold η. This approach enables the model to reassess each "[IDK]" tag and either replace it with a more appropriate object based on the image or remove it entirely. Using these modification strategies, for every accurate description we provide GPT-3.5 with a list of potential co-occurring objects and a list of uncertain objects, and then prompt GPT-3.5 to generate the corresponding hallucinatory description using the prompts listed in Appendix A.3. Finally, we leverage the constructed hallucination dataset to fine-tune an LVLM and use it as the revisor. Some examples of hallucinatory descriptions are given in Appendix D.2. The training pipeline is illustrated in Alg. 1.
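A minimal sketch of the "[IDK]" masking rule just described, assuming per-token log-probabilities and Penn-Treebank-style part-of-speech tags are available; the thresholds and all names are illustrative, not the paper's implementation:

```python
def mask_uncertain_and_late(tokens, logprobs, pos_tags, gamma=1.0, eta=0.8):
    """Replace noun tokens that are uncertain or appear late with '[IDK]'."""
    n = len(tokens)
    masked = []
    for i, (tok, lp, tag) in enumerate(zip(tokens, logprobs, pos_tags)):
        is_noun = tag.startswith("NN")        # e.g., NN, NNS from a POS tagger
        uncertain = -lp >= gamma              # -log p(z_i | s_<i, x) >= gamma
        late = i >= eta * n                   # Index(z_i) >= eta * Length(s)
        masked.append("[IDK]" if is_noun and (uncertain or late) else tok)
    return masked
```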
Algorithm 1 Training LVLM Hallucination Revisor in LURE

Require: training image set X; ground-truth descriptions Y; LVLM M(·); uncertainty threshold γ; hallucination revisor R_θ(·) with parameters θ; position threshold η
1: Use GPT-3.5 to construct the hallucinatory description set H_old (see Appendix A.3 for more details)
2: Initialize the revisor's parameters θ and an empty set H_new ← {}
3: while not converged do
4:   for each image x ∈ X and the corresponding hallucinatory description h ∈ H_old do
5:     Generate description s = M(x) with object set O_s
6:     for each object o_{s,i} ∈ O_s do
7:       if o_{s,i} in h and -log p(o_{s,i} | M, x) ≥ γ then mask o_{s,i} in h with "[IDK]"
8:       if o_{s,i} in h and Index(o_{s,i}) ≥ η · Length(h) then mask o_{s,i} in h with "[IDK]"
9:     Put h into H_new
10:  Update parameters θ with the autoregressive loss L(R_θ(H_new), Y)

3.2 DEPLOYING HALLUCINATION REVISOR

In the inference stage, the trained revisor is employed to rectify the generated descriptions. Specifically, mirroring the construction of hallucinatory descriptions during the training phase, we integrate the placeholder tag "[IDK]" into the generated descriptions. This forces the revisor to reevaluate objects exhibiting high uncertainty or appearing later in the generated text. The inference pipeline is detailed in Alg. 2.

Algorithm 2 Inference Pipeline of LURE

Require: test image x_t; LVLM M(·); trained hallucination revisor R̂_θ(·); uncertainty threshold γ; position threshold η
1: Generate description s_t = M(x_t) with object set O_{s_t}
2: for each object o_{s_t,i} ∈ O_{s_t} do
3:   if -log p(o_{s_t,i} | M, x_t) ≥ γ then
4:     Add the placeholder tag "[IDK]" to s_t, i.e., s_t ← Mask(s_t, o_{s_t,i})
5:   if Index(o_{s_t,i}) ≥ η · Length(s_t) then
6:     Add the placeholder tag "[IDK]" to s_t, i.e., s_t ← Mask(s_t, o_{s_t,i})
7: return R̂_θ(s_t)
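A compact Python sketch of this inference loop, assuming a hypothetical `lvlm` object that returns a caption together with per-object positions and log-probabilities, and a fine-tuned `revisor`; none of these names come from the paper's code:

```python
def lure_inference(image, lvlm, revisor, gamma=1.0, eta=0.8):
    """Alg. 2 sketch: mask uncertain or late objects, then let the
    revisor rewrite the masked caption conditioned on the image."""
    caption, objects = lvlm.describe(image)  # objects: [(name, index, logprob), ...]
    length = len(caption.split())
    for name, index, logprob in objects:
        if -logprob >= gamma or index >= eta * length:
            caption = caption.replace(name, "[IDK]", 1)  # Mask(s_t, o_{s_t,i})
    return revisor.rewrite(image, caption)
```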
4 EXPERIMENTS

In this section, we evaluate the performance of LURE, aiming to answer the following questions: (1) Can LURE effectively reduce object hallucination in LVLMs compared to other baselines? (2) Can the key factors we have identified as related to hallucinations in LVLMs benefit the training process of the revisor? (3) Is LURE sensitive to the revisor's