Personality Traits in Large Language Models

Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić

arXiv:2307.00184 (cs.CL, cs.AI, cs.CY, cs.HC; 68T35; I.2.7) · http://arxiv.org/pdf/2307.00184 · Published 2023-07-01, updated 2023-09-21

Abstract: The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public worldwide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.
In humans, agreeableness is strongly negatively related to aggression [8]. IPIP-NEO Agreeableness data for all 62B-parameter models and larger showed good-to-excellent criterion validity in their relation to tested aggression subscales taken from the BPAQ: Physical Aggression (PHYS), Verbal Aggression (VRBL), Anger (ANGR), and Hostility (HSTL). As depicted in Supplemental Figure 8b, model size, rather than instruction fine-tuning, is more strongly related to the criterion validity of agreeableness measurements in LLMs.

Conscientiousness. In humans, conscientiousness is meta-analytically related to the human values of achievement, conformity, and security [85]. Supplemental Figure 8c shows how the conscientiousness measurements of all instruction fine-tuned PaLM variants exhibited stronger evidence of criterion validity than those of the base model, PaLM 62B. Flan-PaLM 540B was the best performer by a small margin, with criterion correlations of 0.74, 0.73, and 0.59 for PVQ-RR Achievement (ACHV), Conformity (CONF), and Security (SCRT), respectively.
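The criterion correlations reported here are plain Pearson correlations between trait scores and criterion-measure scores across simulated respondents. Below is a minimal sketch of that computation with synthetic stand-in data; all names and numbers are illustrative, not from the paper's released code or data.

```python
import numpy as np
from scipy import stats

def criterion_validity(trait_scores, criterion_scores):
    """Pearson correlation between a trait measure and a criterion measure."""
    return stats.pearsonr(trait_scores, criterion_scores)

# Illustrative stand-in data: conscientiousness scores vs. a correlated
# PVQ-RR Achievement criterion (not the paper's actual data).
rng = np.random.default_rng(0)
conscientiousness = rng.normal(3.5, 0.5, size=100)
achievement = 0.7 * conscientiousness + rng.normal(0, 0.3, size=100)
r, p = criterion_validity(conscientiousness, achievement)
print(f"criterion r = {r:.2f} (p = {p:.3g})")
```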
Neuroticism. Human neuroticism is strongly positively correlated with negative affect and moderately negatively correlated with positive affect [113]. IPIP-NEO Neuroticism data for all models, except those for the base model (PaLM 62B), showed excellent evidence of criterion validity in their relation to PANAS Positive Affect and Negative Affect subscale scores (see Supplemental Figure 8d). IPIP-NEO Neuroticism's criterion validity, in terms of how the strengths and directions of its criterion correlations aligned with those observed among human data, increased with model size.

Openness. Openness to experience in humans is empirically linked to creativity across multiple studies [100, 51]. Supplemental Figure 8e illustrates how the LLM-specific criterion validity of openness measurements is strongest for medium-sized, fine-tuned variants of PaLM, with IPIP-NEO criterion correlations with SSCS Creative Self-Efficacy (CSE) and Creative Personal Identity (CPI) ranging from moderate (r = 0.59) to strong (r = 0.84). Notably, we observed negative correlations between openness and creativity for PaLM 62B, in contrast to those shown for Flan-PaLM 8B, the smallest model tested.
Relative improvements in the reliability and validity of LLM personality measurements along the axes of model size and instruction fine-tuning reflected LLM performance on various benchmark tasks in the literature. Specifically, these improvements tracked observed increases in the reading comprehension, question answering, and reasoning task performance of these models along these same axes [15, 16, 115, 116]. We hypothesize that the same mechanisms that drive better LLM performance on language understanding tasks also help LLMs meaningfully emulate human personality traits in relation to semantically related emotional and behavioral content, as captured by our criterion validity tests. Appendix N further discusses this hypothesis and the comparison to benchmark LLM results.

# 3 Shaping Synthetic Personality Traits in LLMs
Having found evidence of the reliability and construct validity of LLM personality measurements, we next considered our second research question: Can personality in LLMs be shaped reliably along desired dimensions? To answer this, we devised a novel prompting methodology that shaped each synthetic personality trait at nine intensity levels, using Likert-type linguistic qualifiers [61] and 104 trait adjectives, expanding upon Goldberg's personality trait markers [32]. We evaluated LLM personality score changes in response to personality-shaped prompts across two experiments: single-trait shaping and multiple-trait shaping (see Appendix J for details). Our first experiment tested the abilities of LLMs to shape emulated Big Five dimensions of personality independently, targeting single personality dimensions in isolation without prompting other dimensions. Our second experiment tested the abilities of LLMs to shape synthetic Big Five traits concurrently, specifying target levels of all five dimensions in every prompt set at the same time. As a more rigorous test of representational capacity, this experiment required the tested LLMs to disambiguate complex overlaps in personality domain information in parallel. The designed difficulty of the task was further underscored by extant human research indicating that Big Five personality dimensions measured in questionnaires [84] and natural language [83] are not entirely orthogonal; they are weakly intercorrelated.
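To make the two designs concrete, the sketch below enumerates the kinds of target configurations each experiment implies. The level encoding and variable names are assumptions for illustration only; the actual prompt sets (which further cross these targets with simulated-participant personas) are specified in Appendix J of the paper.

```python
# Sketch of the two shaping designs: single-trait sets vary one Big Five
# dimension across nine levels; multi-trait sets assign a level to all five
# dimensions at once (here only the two extremes, as in the concurrent
# shaping experiment). Identifiers are hypothetical.
from itertools import product

BIG_FIVE = ["EXT", "AGR", "CON", "NEU", "OPE"]
LEVELS = range(1, 10)  # nine ordinal intensity levels

single_trait_sets = [
    {trait: level} for trait in BIG_FIVE for level in LEVELS
]  # 5 traits x 9 levels = 45 target configurations

multi_trait_sets = [
    dict(zip(BIG_FIVE, combo)) for combo in product((1, 9), repeat=5)
]  # 2^5 = 32 all-extreme configurations

print(len(single_trait_sets), len(multi_trait_sets))  # 45 32
```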
# 3.1 Methodology Overview

To shape synthetic personality in LLMs, we began with established theory that salient descriptors of personality are encoded in language, known as the lexical hypothesis [31]. We incorporated this knowledge into the prompt design, adapting Goldberg's list of 70 bipolar adjectives [32] known to statistically capture the Big Five model of personality through human ratings and factor analysis. In this list, for example, the adjectives "silent" and "talkative" were found to mark relatively low and high levels of extraversion, respectively (see Table 3). We mapped these adjectives to each of the Big Five domains and 30 lower-order personality facets measured by the IPIP-NEO, based on Goldberg's original study [32]. Next, where we lacked coverage of a measured IPIP-NEO domain or facet, a trained psychometrician wrote additional adjectives, bringing our expanded list of trait adjectives to 104.
Figure 2: Ridge plots showing the frequency distributions of IPIP-NEO personality scores generated by Flan-PaLMChilla 62B as targeted prompts shape each of the Big Five domains to one of nine different levels. Each column of plots represents the observed scores on a specific IPIP-NEO subscale across all prompt sets (e.g., the leftmost column represents the scores observed on the IPIP-NEO Extraversion subscale). Each row depicts the observed personality scores across a single prompt set shaping a single specific Big Five domain to one of nine levels (e.g., the first row shows results of shaping extraversion). Each ridge plot comprises nine traces of personality score distributions in response to prompt sets targeting each level (e.g., traces labeled "3" represent the prompt set shaping a dimension to Level 3 of 9). The plots along the diagonal, from top-left to bottom-right, depict the intended personality shaping results across all five prompt sets.
Table 3: Adapted trait marker examples for each Big Five domain. Supplemental Table 12 contains the full list.

| Domain | Facet Description | Low Marker | High Marker |
|--------|-------------------|------------|-------------|
| EXT | E2 - Gregariousness | silent | talkative |
| EXT | E5 - Excitement-Seeking | unenergetic | energetic |
| AGR | A3 - Altruism | unaltruistic | altruistic |
| AGR | A4 - Cooperation | uncooperative | cooperative |
| CON | C3 - Dutifulness | irresponsible | responsible |
| CON | C4 - Achievement-Striving | lazy | hardworking |
| NEU | N1 - Anxiety | easygoing | anxious |
| NEU | N6 - Vulnerability | emotionally stable | emotionally unstable |
| OPE | O2 - Artistic Interests | uncreative | creative |
| OPE | O4 - Adventurousness | uninquisitive | curious |

Table 3 shows examples of trait adjectives for agreeableness and extraversion, while Supplemental Table 12 reports the full list. For more precise control of personality levels, we used linguistic qualifiers often used in Likert-type response scales [61] (e.g., "a bit," "very," "extremely") to configure a target level for each adjective. The resulting prompt design, described in Appendix J.1, facilitated granular shaping of a given Big Five trait at up to nine levels.
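As an illustration of this qualifier mechanism, the sketch below maps a 1-9 target level onto a qualified marker adjective. The low/high markers follow Table 3, but the exact per-level qualifier wording is an assumption; the paper's verbatim prompt templates are in its Appendix J.1.

```python
# Sketch: turning a target level (1-9) into a qualified trait-adjective
# phrase. Level 5 is treated as the neutral midpoint; levels further from
# the midpoint get stronger qualifiers. This mapping is illustrative, not
# the paper's exact design.
QUALIFIERS = ["extremely", "very", "", "a bit"]  # strongest to weakest

def qualified_marker(level, low_marker, high_marker):
    if level == 5:
        return f"neither {low_marker} nor {high_marker}"
    adjective = low_marker if level < 5 else high_marker
    strength = abs(level - 5)          # 1..4 steps from the midpoint
    qualifier = QUALIFIERS[4 - strength]
    return f"{qualifier} {adjective}".strip()

print(qualified_marker(1, "silent", "talkative"))  # extremely silent
print(qualified_marker(6, "silent", "talkative"))  # a bit talkative
print(qualified_marker(9, "silent", "talkative"))  # extremely talkative
```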
Across both shaping experiments, we tested only models that demonstrated at least "neutral to good" reliability in our Construct Validity experiments (Table 2): Flan-PaLM 8B, Flan-PaLM 62B, Flan-PaLM 540B, and Flan-PaLMChilla 62B.

# 3.2 Evaluation Methodology

In the single-trait shaping experiment (described in detail in Appendix J.2), our objective was to independently shape each Big Five trait at each of the nine levels. We benchmarked the success of independent shaping by 1) quantifying how strongly shifts in IPIP-NEO score distributions were related to shifts in the targeted trait levels embedded in our prompt sets (i.e., through Spearman's rank correlation coefficient ρ, Eq. (5)); and 2) inspecting the distance between the personality score distributions obtained in response to our most extreme prompt sets; specifically, the prompts shaped to the lowest possible levels of a trait, versus those shaped to the highest possible levels, should produce score distributions that are far apart from each other.
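A sketch of these two benchmarks is given below, assuming parallel arrays of targeted levels and observed IPIP-NEO scores. The rank correlation matches the Spearman's ρ named above; the median-separation function is a stand-in consistent with the ∆ values the paper reports, not its exact implementation.

```python
import numpy as np
from scipy import stats

def shaping_rho(targeted_levels, observed_scores):
    """Spearman's rank correlation between prompted levels and scores."""
    return stats.spearmanr(targeted_levels, observed_scores)

def extreme_median_distance(targeted_levels, observed_scores):
    """Separation between score medians of the most extreme prompt sets."""
    levels = np.asarray(targeted_levels)
    scores = np.asarray(observed_scores)
    low = np.median(scores[levels == 1])    # "extremely low" prompt set
    high = np.median(scores[levels == 9])   # "extremely high" prompt set
    return high - low
```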
In the multi-trait shaping experiment (described in detail in Appendix J.3), to more rigorously test model capacities for attention, we aimed to concurrently shape all Big Five traits as high and as low as possible. We benchmarked the success of concurrent shaping by distributional distance, as defined above.

# 3.3 Shaping Results

We successfully shaped personality traits in LLMs independently and concurrently, in the single- and multi-trait shaping experiments, respectively, particularly in larger models. The results of both experiments are reported in greater detail in Appendix K.

# 3.3.1 Single trait shaping

Across all tested models, ordinal targeted levels of personality correlated very strongly with observed IPIP-NEO scores (ρs ≥ 0.90; see Supplemental Table 13). Figure 2 visualizes this strong association, depicting how Flan-PaLMChilla 62B's personality scores monotonically increased alongside prompted levels of a given Big Five trait.
[Figure 3 panels: frequency distributions of response scores on the IPIP-NEO EXT, AGR, CON, NEU, and OPE subscales, one row per model, comparing prompts at "extremely low" vs. "extremely high" trait levels; see the caption below.]
Figure 3: Ridge plots showing the effectiveness of the tested models in concurrently shaping specific LLM personality traits, by distancing the frequency distributions of IPIP-NEO personality scores when prompted to be "extremely low" (Level 1) vs. "extremely high" (Level 9). Each column of plots represents the observed scores on a specific domain subscale across all prompt sets (e.g., the leftmost column represents the scores observed for IPIP-NEO Extraversion). Each row depicts the observed personality scores across all subscales for a specific model. Each ridge plot comprises two traces of personality score distributions. The red trace represents responses to prompt sets where the domain tested by the subscale (represented by the column) is set to an "extremely low" trait level, with the other four domains set to each of the two extreme levels an equal number of times. Analogously, the blue trace represents responses when the subscale's domain is set to an "extremely high" trait level, with the other four domains again set to the two extremes in equal measure. The clear difference in distributions for low vs. high traits in all five dimensions, especially for Flan-PaLM 540B, indicates that the model can effectively shape all of the dimensions concurrently to their desired levels, regardless of the trait level set for each individually.
Notably, levels of unprompted traits remained relatively stable in response to shaping. For instance, the medians of Flan-PaLMChilla 62B's openness scores remained near 3.00 when all other Big Five domains were shaped (see the right side of Figure 2). Similar patterns of stability were observed for extraversion and agreeableness. Conscientiousness and neuroticism scores fluctuated the most in response to prompts that did not target those domains, but the fluctuations did not reach the strength and direction of the score changes observed in the ridge plots of the targeted traits (the plots along the diagonal, from top-left to bottom-right).
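One way to reproduce this kind of stability check from raw shaping runs is sketched below, assuming a long-format pandas DataFrame; the column names are illustrative, not from the paper's code.

```python
# Sketch: summarizing shaping runs to check that non-targeted domains stay
# stable. Assumes a DataFrame with columns "targeted_domain", "level" (1-9),
# "subscale" (observed IPIP-NEO domain), and "score".
import pandas as pd

def median_score_grid(df: pd.DataFrame) -> pd.DataFrame:
    """Median observed score per targeted domain, subscale, and level."""
    return (
        df.groupby(["targeted_domain", "subscale", "level"])["score"]
          .median()
          .unstack("level")
    )

# Off-diagonal rows (targeted_domain != subscale) should stay roughly flat
# across levels; diagonal rows should increase monotonically with level.
```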
We also observed the ability of the tested models to disambiguate the prompted low-trait vs. high-trait levels for each targeted dimension. This is evidenced in Supplemental Table 13 by the distances (∆s) between the medians of the IPIP-NEO score distributions obtained in response to the lowest- and highest-leveled prompts. As model size increased, these score distributions moved farther apart, as desired. Additionally, we found that the compute-optimally-trained Flan-PaLMChilla 62B performed better at this disambiguation than the similarly sized Flan-PaLM 62B. Appendix K.1 discusses the single-trait shaping results in greater detail.

# 3.3.2 Multiple trait shaping

When we concurrently set the prompted trait levels of each of the Big Five dimensions to either "extremely high" or "extremely low," all of the tested models produced IPIP-NEO response score distributions with a discernible difference between the high and low levels. Figure 3 shows the distributions of LLM-synthesized personality when the models were prompted to exhibit extremely low (red) or extremely high (blue) levels of all dimensions in parallel.
Distributional distance increased with model size, particularly for observed neuroticism, openness, and conscientiousness scores. Our largest tested model, Flan-PaLM 540B, successfully shaped all Big Five personality dimensions concurrently and achieved levels of control similar to those observed in the single-trait shaping experiment. As shown in Supplemental Table 14, Flan-PaLM 540B consistently separated the medians by 2.53 on average across all dimensions, while the smaller Flan-PaLM 62B and Flan-PaLMChilla 62B did well on extraversion. Of all the models, Flan-PaLM 62B performed best when prompted to exhibit the highest level of extraversion.
In the smaller Flan-PaLM 8B model, while targeted traits changed in score levels in response to prompts, score ranges were more restricted, indicating lower levels of control. Flan-PaLM 8B's median scores on IPIP-NEO Agreeableness, for instance, shifted from 2.88 to only 3.52 (a difference of 0.64) when the model was prompted to simulate "extremely low" and "extremely high" levels of agreeableness (i.e., Level 1 vs. Level 9), respectively. When Flan-PaLM 8B was given the same extremely low and high prompts as in the first shaping experiment, the median difference between its level-1-prompted and level-9-prompted agreeableness scores (2.37 and 4.12, respectively, a difference of 1.75) was 173% larger (1.75 / 0.64 ≈ 2.73). Appendix K.2 discusses the results in further detail.

Both experiments illustrate how model size, and, in turn, capacity for attention [112], are key determinants of an LLM's ability to express complex social traits in a controlled way.
These findings have two implications for efforts to simulate social traits in LLMs. First, when LLMs are tasked with concurrently simulating a behavioral profile with five broad components (e.g., the Big Five), larger-sized quantized models do much better than their smaller counterparts, which may lack the representational capacity. The number and composition of an LLM's transformer layers and attention heads greatly affect its expressivity and its ability to access language concepts it might have seen during pretraining (in-context learning) [49]. Larger models make more efficient use of this in-context information [11]. The PaLM models used here were configured such that the number of attention heads and layers scaled with model size (i.e., number of parameters) [15]; such scaling tracks model performance on natural language tasks.
Second, these findings suggest that smaller, more optimized LLMs are also capable of simulating significant aspects of a complete and complex personality profile, compared to larger LLMs. Relatively smaller models trained for longer on larger datasets display similar (if not better) performance on language understanding tasks [49, 40]. This enhanced in-context learning ability (aided by specific changes to the attention mechanism) is more pronounced for smaller models than for larger ones. Our results similarly show that relatively smaller models, with or without compute-optimal training, may have sufficient ability to emulate specific dimensions of a broader multidimensional personality profile. When instructed to independently shape its levels of agreeableness, for instance, Flan-PaLMChilla 62B performed comparably to Flan-PaLM 540B, a substantially larger model, in terms of our distributional distance metric (Supplemental Table 13). Further, in the more complex concurrent shaping task, Flan-PaLM 62B performed similarly to Flan-PaLM 540B in concurrently shaping its levels of agreeableness; it indeed outperformed Flan-PaLM 540B in one instance, better simulating extremely low and high desired levels of extraversion (Figure 3; see also Supplemental Table 14).
In sum, our results emphasize that model scaling drives more meaningful syntheses of personality traits in LLMs, while simultaneously highlighting that scaling is not a strict requirement for LLM performance improvements in this domain.

# 4 LLM Personality Traits in Real-World Tasks

So far we have reported the results of validating personality measurements in LLMs through psychometric testing and analysis. However, we also sought to address possible concerns that the construct validity of LLM personality measurements, evidenced by LLM responses to other psychometric tests, could be an artifact of common method bias [88]. In other words, our questionnaire-based signals of LLM personality were validated by responses to other questionnaires that have not undergone the same LLM-specific construct validation process. To address this risk of common method bias, we further scrutinized the construct validity of personality measurements in LLMs in a real-world use case in two ways: 1) by evaluating the ability of survey-based signals of LLM personality to reflect levels of personality expressed in a downstream generative task of creating social media posts; and 2) by investigating the effects of LLM personality shaping on the outputs of this task.
# 4.1 Methodology Overview

The structured prompts that independently shaped LLM personality domains at nine levels (introduced in Section 3.1 and described in detail in Appendix J.2) were adapted to instruct Flan-PaLM 540B to generate 225,000 social media status updates, i.e., 100 updates for each of the 2,250 simulated participant prompt sets used in Section 3. The personality observed in the status updates generated for each simulated participant was then rated using the Apply Magic Sauce (AMS) API [55], a validated personality prediction service for open-ended text. The chosen task was designed to reflect adequate levels of realism, complexity, and domain relevance for evaluating the LLM. Appendix L details the task design and rationale.
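The generation-and-scoring loop can be summarized as in the sketch below; `generate` and `predict_big5` are hypothetical stand-ins for the PaLM sampling call and the AMS API client, whose real interfaces are not shown in the paper.

```python
# Sketch of the downstream-task pipeline: for each shaped prompt set,
# generate status updates, then score the pooled text for Big Five levels.
# All names are illustrative placeholders.
def personality_in_generated_text(prompt_sets, generate, predict_big5,
                                  n_updates=100):
    """Generate status updates per shaped prompt set and score them."""
    scores = []
    for prompt in prompt_sets:  # e.g., the 2,250 simulated participants
        updates = [generate(prompt) for _ in range(n_updates)]
        scores.append(predict_big5(" ".join(updates)))  # AMS-style rating
    return scores
```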
To evaluate how psychometric tests may reflect personality levels in downstream LLM tasks, we computed Pearson's correlations (rs; Eq. (4)) between Flan-PaLM 540B's IPIP-NEO personality scores and its (AMS-derived) generated-text-based personality scores (both sets of scores were linked by the same 2,250 personality-shaping prompts used in Section 3). Next, we statistically verified the effectiveness of personality shaping by computing Spearman's rank correlations (ρs; Eq. (5)) between prompted ordinal levels of personality and the (continuous) personality levels observed in the model's generated text. At least a moderate correlation between survey-based and linguistic estimates of personality in LLMs (as demonstrated in previously reported human data [83]) would indicate that a survey-based measure of personality accurately predicts LLM-synthesized personality in subsequent tasks such as text generation.
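A sketch of both statistics follows, assuming per-dimension score mappings keyed the same way for the survey-based (IPIP-NEO) and text-based (AMS) measures; the names are illustrative.

```python
from scipy import stats

BIG_FIVE = ["EXT", "AGR", "CON", "NEU", "OPE"]

def convergent_r(ipip_scores, ams_scores):
    """Pearson's r between survey- and text-based scores per dimension."""
    return {d: stats.pearsonr(ipip_scores[d], ams_scores[d])[0]
            for d in BIG_FIVE}

def shaping_rho(prompted_levels, ams_scores):
    """Spearman's rho between prompted ordinal levels and text scores."""
    return {d: stats.spearmanr(prompted_levels[d], ams_scores[d])[0]
            for d in BIG_FIVE}
```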
Figure 4: Ability of Flan-PaLM 540B's psychometric test data (blue) to accurately predict personality levels in its shaped generated text outputs (social media status updates), compared to human baselines (red) reported in previous work [83]. LLM IPIP-NEO scores outperformed human IPIP-NEO scores in predicting text-based levels of personality, indicating that LLM personality test responses accurately capture latent LLM personality signals manifested in downstream behavior. All LLM correlations are statistically significant at p < .0001; n = 2,250.
# 4.2 Real-World Tasks Results

Psychometric tests of LLM personality robustly predicted personality in LLM task behavior, as expressed in the 225,000 social media status updates generated by Flan-PaLM 540B. Flan-PaLM 540B's IPIP-NEO scores strongly correlated with the language-based (AMS-derived) personality levels observed in model-generated text, as shown in Figure 4. In particular, the average convergent r between survey- and generated-language-based measures across all five dimensions was 0.55. This observed convergence exceeded the established convergence between survey- and language-based levels of personality reported for humans (avg. r = 0.38) [83]. Moreover, our prompting technique was highly effective at shaping personality levels in LLM-generated text.

Table 4: Spearman's rank correlation coefficients (ρ) between ordinal targeted levels of personality and language-based (Apply Magic Sauce API) personality scores for Flan-PaLM 540B. Prompted levels of personality are strongly related to the personality observed in synthetically-generated social media status updates for all Big Five traits except openness, which is moderately correlated with target levels, demonstrating that LLM personality can be verifiably shaped in generative tasks. All correlations are statistically significant at p < 0.0001; n = 450 per targeted domain.
Targeted Trait Spearman’s ρ Extraversion Agreeableness Conscientiousness Neuroticism Openness 0.74 0.77 0.68 0.72 0.47 effective at shaping personality levels in LLM- generated text. On average, prompted trait levels were moderately-to-strongly correlated with personal- ity levels observed in Flan-PaLM 540B’s social media status updates (avg. ρ = 0.68; see Table 4). Prompted levels of openness moderately correlated with gener- ated text levels of openness in this model.
To illustrate the practical implications of the personality shaping methodology, we present word clouds to gain insight into the model-generated language that users would see. Figure 5a shows the most frequent words in synthetic social media updates when Flan-PaLM 540B simulated extremely low levels of neuroticism (i.e., extremely high emotional stability). LLM-generated language in response to this prompting was characterized by positive emotion words, such as "happy," "relaxing," "wonderful," "hope," and "enjoy." In contrast, the most frequent words from simulating extremely high levels of neuroticism ("hate," "depressed," "annoying," "stressed," "nervous," "sad") reflected negatively charged emotional content (Figure 5b). Supplemental Table 15 provides examples for all personality domains. This experiment demonstrated that LLM-generated language was similar to human language observed in previous studies assessing personality in social media data [83], further confirming the construct validity of our LLM personality measurements.
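The sketch below illustrates how frequency-based word clouds of this kind can be generated from model outputs. It is a minimal example assuming the open-source `wordcloud` package; the package choice, stop-word handling, and toy status updates are illustrative, not the paper's actual pipeline.

```python
# Sketch: build a word cloud image from model-generated status updates.
from wordcloud import WordCloud, STOPWORDS

# Toy stand-ins for updates generated under "extremely low" neuroticism.
updates = [
    "Feeling happy and relaxed after a wonderful day at the park.",
    "So grateful today, enjoying time with family and friends!",
    "Hope everyone has a calm, peaceful weekend.",
]

cloud = WordCloud(
    width=800,
    height=400,
    background_color="white",
    stopwords=STOPWORDS,  # drop high-frequency function words
).generate(" ".join(updates))

cloud.to_file("low_neuroticism_cloud.png")
```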
Figure 5: Word clouds showing some of the highest-frequency words used in social media updates generated by Flan-PaLM 540B when prompted to simulate a) "extremely low" levels of neuroticism (i.e., highest emotional stability); and b) "extremely high" levels of neuroticism (i.e., lowest emotional stability). Supplemental Figure 9 shows word clouds for the remaining Big Five dimensions.
# 5 Discussion

The goal of this work was to contribute a principled methodology for reliably and validly measuring synthetic personality in LLMs, and to use the same validated methods to shape LLM personality expression. We provided a complete methodology to 1) quantify personality traits that may be perceived by humans in LLM outputs through psychometric testing; 2) verify that psychometric tests of LLM personality traits are empirically reliable and valid; and 3) provide mechanisms to increase or decrease levels of specific LLM personality traits. The application of this methodology demonstrates that psychometric tests provide reliable and valid measurements of synthetic personality for sufficiently scaled and instruction-tuned LLMs, highlighting possible mechanisms that allow LLMs to encode and express complex social phenomena (see Appendix N).

# 5.1 Limitations and Future Work

Personality traits of other LLMs. One of the core contributions of this work is an understanding of how simulating personality in language models is affected by model size and training procedure. We focused on the PaLM model variants for pragmatic reasons, but the presented methodology for administering psychometric surveys is model-agnostic and applicable to any decoder-only architecture, such as GPT [39].
Psychometric test selection and validation. This work also contributes a principled way to establish the reliability and validity of psychometric personality tests in the LLM context. However, this work may be biased by its selection of psychometric tests; some assessments may show better LLM-specific psychometric properties than others. We attempted to mitigate selection bias by administering personality assessments of different lengths (300 vs. 44 items) and distinct theoretical traditions (questionnaire vs. lexical [102]). Future work could administer different personality tests (e.g., the HEXACO Personality Inventory, which uses a cross-cultural six-factor taxonomy of personality [58]), develop personality tests tailored for LLMs to obtain more accurate trait measurements, and validate personality measurements with additional external criteria and downstream tasks.
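As one concrete example of such validation, internal-consistency reliability is commonly summarized with Cronbach's α [20]. Below is a minimal sketch of that computation; the toy item-response matrix is an illustrative assumption, not data from this study.

```python
# Sketch: Cronbach's alpha for a (respondents x items) score matrix.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: four simulated respondents answering a 3-item Likert subscale.
responses = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```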
Monocultural bias. This work contributes evidence that at least some LLMs exhibit personality traits that approximate human standards of reliability and validity. However, the LLMs tested here were primarily trained on language data originating from Western European and North American users [15]. While these LLMs perform well on natural language processing benchmarks in multiple languages, the models in this work were assessed exclusively with English-language psychometric tests. Most of the tests used here, however, have non-English translations validated in cross-cultural research that merit future use in LLM research. Similarly, while the Big Five model of personality has well-established cross-cultural generalizability [94], some non-Western cultures express additional personality dimensions that do not exist in top-down personality taxonomies [38]. Those dimensions may be better represented in culture-specific (i.e., idiographic) approaches to measuring personality in LLMs.
Evaluation settings. Unlike conventional human questionnaire administration, under the presented methodology the LLMs did not consider responses to prior questionnaire items; all items were presented and scored as independent events. We chose this method to ensure that model response variance was not affected by item-ordering effects or by the length of the context (prompt) provided to the model for inference, and could instead be isolated to controlled variations in our prompts. LLM performance on natural language tasks is known to decrease as the length of input prompts grows, and is most affected by the content at either the beginning or toward the end of long inputs [63]. Non-instruction-tuned LLMs are known to show biased attention for more recent tokens (i.e., the end of inputs), especially when evaluating next-word prediction of contiguous text [105]. This uneven attention compounds approximation errors in longer contexts [89], such as those necessitated by the 300-item IPIP-NEO used here, motivating our use of independent item administration. On the other hand, psychometric test data quality for humans can be affected by test length and item order. Our method avoids some sources of measurement error inherent to human administration, while being subject to others inherent to machine administration.
Additionally, model responses to the multiple-choice questions were scored rather than generated, to ensure reproducibility. LLMs are more commonly used to generate text than to score continuations, and that generative mode of inference might provide a more realistic estimate of a model's behavior.
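The sketch below illustrates this scored, item-independent administration in general terms: for each item, the log-likelihood of every Likert response option is computed, and the highest-scoring option is taken as the model's answer. The Hugging Face stand-in model, the prompt wording, and the helper function are assumptions for illustration; the paper's PaLM-based pipeline is not reproduced here.

```python
# Sketch: score one questionnaire item by comparing the log-likelihood
# the model assigns to each Likert response option, independently of
# all other items. Uses GPT-2 as a small stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

OPTIONS = ["1", "2", "3", "4", "5"]  # Likert ratings, scored 1-5

def option_logprob(prompt: str, option: str) -> float:
    """Sum of log-probabilities of the option's tokens given the prompt."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + option, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(full_ids).logits, dim=-1)
    total = 0.0
    for pos in range(prompt_len, full_ids.shape[1]):
        # Logits at position pos-1 predict the token at position pos.
        total += log_probs[0, pos - 1, full_ids[0, pos]].item()
    return total

item = ("Rate how accurately the statement 'I see myself as someone who "
        "is talkative' describes you, on a scale from 1 to 5: ")
scores = {opt: option_logprob(item, opt) for opt in OPTIONS}
print("Model-selected rating:", max(scores, key=scores.get))
```

Because each item is scored in isolation, the context never grows with the questionnaire, which is exactly the property the administration protocol above relies on.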
# 5.2 Broader Implications

Responsible AI alignment. The ability to probe and shape LLM personality traits is pertinent to the open problem of responsible AI alignment [28] and harm mitigation [118]. As a construct-validated auditing tool [76], our methodology can be used to proactively predict toxic behavioral patterns in LLMs across a broad range of downstream tasks, potentially guiding responsible AI evaluation and alignment efforts, and making them more efficient, prior to deployment. Similarly, shaping levels of specific traits away from toxic or harmful language output (e.g., very low agreeableness, high neuroticism) can make interactions with LLMs safer and more inclusive. The values and moral foundations present in LLMs could be made to better align with desired human values by tuning for corresponding personality traits, since personality is meta-analytically linked to human values [26]. More directly, the presented methodology can be used to rigorously quantify efforts toward human value alignment in LLMs by establishing the construct validity of human value questionnaires in LLMs.
Implications for users. Users could enjoy customized interactions with LLMs tailored to their specific personality traits, toward enhanced engagement. LLMs with customized personality traits can enable applications where a chatbot's personality profile is adapted to the task. Our methodology for establishing construct validity can be used as an evaluation step in the process of developing LLM-powered, user-facing chatbots with safer and more consistent personality profiles. Furthermore, the personality shaping methodology can be used in chatbot adversarial testing to probe another LLM's responses and to train users on how to handle adversarial situations.
# 5.3 Ethical Considerations

Personalized LLM persuasion. Adapting the personality profile of a conversational agent to that of a user can make the agent more effective at encouraging and supporting behaviors [107]. Personality matching has also been shown to increase the effectiveness of real-life persuasive communication [67]. However, the same personality traits that contribute to persuasiveness and influence could be used to encourage undesirable behaviors. As LLM-powered chatbots become ubiquitous, their potential to be used for harmful persuasion of individuals, groups, and even society at large must be taken seriously. Having scientifically vetted methods for LLM personality measurement, analysis, and modification, such as the methodology our work presents, increases the transparency and predictability of such LLM manipulations. Persuasive techniques are already ubiquitous in society, so stakeholders of AI systems must work together to systematically determine and regulate AI use; this work aims to inform such efforts.
Anthropomorphized AI. Personalization of conversational agents has documented benefits [52], but there is growing concern about the harms posed by the anthropomorphization of AI. Recent research suggests that anthropomorphizing AI agents may be harmful to users by threatening their identity, creating data privacy concerns, and undermining well-being [111]. Beyond qualitative probing explorations, our work definitively establishes the unexpected ability of LLMs to appear anthropomorphic, and to respond to psychometric tests in ways consistent with human behavior, owing to the vast amounts of human language in their training data. The methods we presented can be used to inform responsible investigation of anthropomorphized AI.

Detection of incorrect LLM information. LLMs can generate convincing but incorrect responses and content [118]. One method for determining whether a text containing a world fact was generated by an LLM (and hence might require vetting) is to rely on the predictable traits of LLM language: its lack of human-like personality and its characteristic linguistic features [106]. However, with personality shaping, that method may be rendered ineffective, making it easier for bad actors to use LLMs to generate misleading content. This problem is part of the larger challenge of aligning and grounding LLMs, areas of growing investigative focus in both academia and industry.
# 6 Conclusion

The display of synthetic personality in LLM outputs is well-established, and personality assessment is critically important for the responsible deployment of LLMs to the general public. Since measurements of LLM personality to date have not yet been rigorously validated, this work presented a principled methodology for a comprehensive quantitative analysis of the personality traits exhibited in personality questionnaire responses and in text generated by widely used LLMs, applying standards from psychometrics. We applied the methodology to models of various sizes and conclusively showed that psychometric tests of LLM personality demonstrate reliability and construct validity for larger and instruction fine-tuned models. We presented a novel methodology for shaping LLM-synthesized personality along desired dimensions, using Goldberg's personality trait markers and Likert-type linguistic qualifiers, to resemble specific personality profiles. Additionally, we discussed the ethical implications of shaping LLM personality traits. This work has important implications for AI alignment and harm mitigation, and informs ethics discussions concerning AI anthropomorphization, personalization, and potential misuse.

# References
[1] M. Abdulhai, C. Crepy, D. Valter, J. Canny, and N. Jaques. Moral foundations of large language models. In AAAI 2023 Workshop on Representation Learning for Responsible Human-Centric AI, 2022.
[2] G. Allport. Personality: A Psychological Interpretation. H. Holt, 1937.
[3] American Educational Research Association, American Psychological Association, and National Council on Measurement in Education, editors. Standards for Educational and Psychological Testing. American Educational Research Association, Lanham, MD, Mar. 2014.
[4] American Educational Research Association, American Psychological Association, National Council on Measurement in Education, and Joint Committee on Standards for Educational and Psychological Testing. Standards for Educational and Psychological Testing. American Educational Research Association, 2014.
[5] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. In Y. Bengio and Y. LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7–9, 2015, Conference Track Proceedings, 2015.
[6] A. T. Beck, R. A. Steer, and M. G. Carbin. Psychometric properties of the Beck Depression Inventory: Twenty-five years of evaluation. Clinical Psychology Review, 8(1):77–100, 1988.
[7] E. M. Bender, T. Gebru, A. McMillan-Major, and S. Shmitchell. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, pages 610–623, New York, NY, USA, 2021. Association for Computing Machinery.
[8] B. A. Bettencourt and C. Kernahan. A meta-analysis of aggression in the presence of violent cues: Effects of gender differences and aversive provocation. Aggressive Behavior, 23(6):447–456, 1997.
[9] W. Bleidorn, P. L. Hill, M. D. Back, J. J. Denissen, M. Hennecke, C. J. Hopwood, M. Jokela, C. Kandler, R. E. Lucas, M. Luhmann, et al. The policy relevance of personality traits. American Psychologist, 74(9):1056, 2019.
[10] R. L. Boyd and J. W. Pennebaker. Language-based personality: A new approach to personality in a digital world. Current Opinion in Behavioral Sciences, 18:63–68, 2017. Big data in the behavioural sciences.
[11] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc., 2020.
[12] D. T. Campbell and D. W. Fiske. Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56(2):81, 1959.
[13] G. Caron and S. Srivastava. Identifying and manipulating the personality traits of language models. CoRR, abs/2212.10276, 2022.
[14] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. de Oliveira Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, A. Ray, R. Puri, G. Krueger, M. Petrov, H. Khlaaf, G. Sastry, P. Mishkin, B. Chan, S. Gray, N. Ryder, M. Pavlov, A. Power, L. Kaiser, M. Bavarian, C. Winter, P. Tillet, F. P. Such, D. Cummings, M. Plappert, F. Chantzis, E. Barnes, A. Herbert-Voss, W. H. Guss, J. Tang, A. Nichol, A. Paino, N. Tezak, et al. Evaluating large language models trained on code. CoRR, abs/2107.03374, 2021.
[15] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, P. Schuh, K. Shi, S. Tsvyashchenko, J. Maynez, A. Rao, P. Barnes, Y. Tay, N. Shazeer, V. Prabhakaran, E. Reif, N. Du, B. Hutchinson, R. Pope, J. Bradbury, J. Austin, M. Isard, G. Gur-Ari, P. Yin, T. Duke, A. Levskaya, S. Ghemawat, S. Dev, H. Michalewski, X. Garcia, V. Misra, K. Robinson, L. Fedus, D. Zhou, D. Ippolito, D. Luan, H. Lim, B. Zoph, A. Spiridonov, R. Sepassi, D. Dohan, S. Agrawal, M. Omernick, A. M. Dai, T. S. Pillai, M. Pellat, A. Lewkowycz, E. Moreira, et al. PaLM: Scaling language modeling with Pathways. CoRR, abs/2204.02311, 2022.
[16] H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, Y. Li, X. Wang, M. Dehghani, S. Brahma, A. Webson, S. S. Gu, Z. Dai, M. Suzgun, X. Chen, A. Chowdhery, A. Castro-Ros, M. Pellat, K. Robinson, D. Valter, S. Narang, G. Mishra, A. Yu, V. Zhao, Y. Huang, A. Dai, H. Yu, S. Petrov, E. H. Chi, J. Dean, J. Devlin, A. Roberts, D. Zhou, Q. V. Le, and J. Wei. Scaling instruction-finetuned language models. CoRR, abs/2210.11416, 2022.
[17] L. A. Clark and D. Watson. Constructing validity: Basic issues in objective scale development. Psychological Assessment, 7(3):309, 1995.
[18] L. A. Clark and D. Watson. Constructing validity: New developments in creating objective measuring instruments. Psychological Assessment, 31(12):1412, 2019.
[19] P. T. Costa, Jr. and R. R. McCrae. Revised NEO Personality Inventory (NEO PI-R) and NEO Five-Factor Inventory (NEO-FFI): Professional Manual. Psychological Assessment Resources, Odessa, FL, 1992.
[20] L. J. Cronbach. Coefficient alpha and the internal structure of tests. Psychometrika, 16(3):297–334, 1951.
[21] S. Crouse, G. Elbaz, and C. Malamud. Common Crawl Foundation, 2008.
[22] C. G. DeYoung. Toward a theory of the Big Five. Psychological Inquiry, 21(1):26–33, 2010.
[23] C. G. DeYoung, R. E. Beaty, E. Genç, R. D. Latzman, L. Passamonti, M. N. Servaas, A. J. Shackman, L. D. Smillie, R. N. Spreng, E. Viding, et al. Personality neuroscience: An emerging field with bright prospects. Personality Science, 3:1–21, 2022.
[24] C. G. DeYoung, J. B. Hirsh, M. S. Shane, X. Papademetris, N. Rajeevan, and J. R. Gray. Testing predictions from personality neuroscience: Brain structure and the Big Five. Psychological Science, 21(6):820–828, 2010.
[25] J. D. Evans. Straightforward Statistics for the Behavioral Sciences. Brooks/Cole Publishing Co, 1996.
[26] R. Fischer and D. Boer. Motivational basis of personality traits: A meta-analysis of value-personality correlations. Journal of Personality, 83(5):491–510, 2015.
[27] I. Gabriel. Artificial intelligence, values, and alignment. Minds and Machines, 30(3):411–437, 2020.
[28] I. Gabriel and V. Ghazavi. The challenge of value alignment: From fairer algorithms to AI safety. In The Oxford Handbook of Digital Ethics. Oxford University Press.
[29] F. Galton. Measurement of character. Fortnightly Review, 36:179–85, 1884.
[30] L. Gao, S. Biderman, S. Black, L. Golding, T. Hoppe, C. Foster, J. Phang, H. He, A. Thite, N. Nabeshima, S. Presser, and C. Leahy. The Pile: An 800GB dataset of diverse text for language modeling. CoRR, abs/2101.00027, 2020.
[31] L. R. Goldberg. Language and individual differences: The search for universals in personality lexicons. Review of Personality and Social Psychology, 2(1):141–165, 1981.
[32] L. R. Goldberg. The development of markers for the Big-Five factor structure. Psychological Assessment, 4(1):26–42, 1992.
[33] L. R. Goldberg. A broad-bandwidth, public domain, personality inventory measuring the lower-level facets of several Five-Factor models. Personality Psychology in Europe, 7(1):7–28, 1999.
[34] A. K. Goodboy and M. M. Martin. Omega over alpha for reliability estimation of unidimensional communication measures. Annals of the International Communication Association, 44(4):422–439, 2020.
[35] L. Guttman. A basis for analyzing test-retest reliability. Psychometrika, 10(4):255–282, 1945.
[36] T. Hagendorff. Machine psychology: Investigating emergent capabilities and behavior in large language models using psychological methods. CoRR, abs/2303.13988, 2023.
[37] C. Hare and K. T. Poole. Psychometric Methods in Political Science, chapter 28, pages 901–931. John Wiley & Sons, Ltd, 2018.
[38] S. J. Heine and E. E. Buchtel. Personality: The universal and the culturally specific. Annual Review of Psychology, 60(1):369–394, 2009.
[39] D. Hendrycks, C. Burns, S. Basart, A. Zou, M. Mazeika, D. Song, and J. Steinhardt. Measuring massive multitask language understanding. In International Conference on Learning Representations, 2021.
[40] J. Hoffmann, S. Borgeaud, A. Mensch, E. Buchatskaya, T. Cai, E. Rutherford, D. de las Casas, L. A. Hendricks, J. Welbl, A. Clark, T. Hennigan, E. Noland, K. Millican, G. van den Driessche, B. Damoc, A. Guy, S. Osindero, K. Simonyan, E. Elsen, O. Vinyals, J. W. Rae, and L. Sifre. An empirical analysis of compute-optimal large language model training. In A. H. Oh, A. Agarwal, D. Belgrave, and K. Cho, editors, Advances in Neural Information Processing Systems, 2022.
[41] A. Z. Jacobs. Measurement as governance in and for responsible AI. CoRR, abs/2109.05658, 2021.
[42] J. Jang, S. Ye, and M. Seo. Can large language models truly understand prompts? A case study with negated prompts. In A. Albalak, C. Zhou, C. Raffel, D. Ramachandran, S. Ruder, and X. Ma, editors, Proceedings of The 1st Transfer Learning for Natural Language Processing Workshop, volume 203 of Proceedings of Machine Learning Research, pages 52–62. PMLR, 03 Dec 2023.
[43] K. Jankowsky, G. Olaru, and U. Schroeders. Compiling measurement invariant short scales in cross-cultural personality assessment using ant colony optimization. European Journal of Personality, 34(3):470–485, 2020.
[44] G. Jiang, M. Xu, S.-C. Zhu, W. Han, C. Zhang, and Y. Zhu. Evaluating and inducing personality in pre-trained language models. CoRR, abs/2206.07550, 2023.
[45] H. Jiang, X. Zhang, X. Cao, and J. Kabbara. PersonaLLM: Investigating the ability of GPT-3.5 to express personality traits and gender differences. CoRR, abs/2305.02547, 2023.

[46] Z. Jiang, J. Araki, H. Ding, and G. Neubig. How can we know when language models know? On the calibration of language models for question answering. Transactions of the Association for Computational Linguistics, 9:962–977, 09 2021.

[47] O. P. John, L. P. Naumann, and C. J. Soto. Paradigm shift to the integrative Big Five trait taxonomy: History, measurement, and conceptual issues. In O. P. John, R. W. Robins, and L. A. Pervin, editors, Handbook of Personality: Theory and Research, pages 114–158. The Guilford Press, 2008.
[48] O. P. John and S. Srivastava. The Big Five trait taxonomy: History, measurement, and theoretical perspectives. In L. A. Pervin and O. P. John, editors, Handbook of Personality: Theory and Research, volume 2, pages 102–138. Guilford Press, New York, 1999.

[49] J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei. Scaling laws for neural language models. CoRR, abs/2001.08361, 2020.

[50] S. R. Karra, S. T. Nguyen, and T. Tulabandhula. Estimating the personality of white-box language models. CoRR, abs/2204.12000, 2023.

[51] M. Karwowski, I. Lebuda, E. Wisniewska, and J. Gralewski. Big Five personality traits as the predictors of creative self-efficacy and creative personal identity: Does gender matter? The Journal of Creative Behavior, 47(3):215–232, 2013.
[52] A. B. Kocaballi, S. Berkovsky, J. C. Quiroz, L. Laranjo, H. L. Tong, D. Rezazadegan, A. Briatore, and E. Coiera. The personalization of conversational agents in health care: Systematic review. J Med Internet Res, 21(11):e15360, Nov 2019.

[53] T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, and Y. Iwasawa. Large language models are zero-shot reasoners. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 22199–22213. Curran Associates, Inc., 2022.

[54] M. Kosinski, S. C. Matz, S. D. Gosling, V. Popov, and D. Stillwell. Facebook as a research tool for the social sciences: Opportunities, challenges, ethical considerations, and practical guidelines. American Psychologist, 70(6):543, 2015.
[55] M. Kosinski, D. Stillwell, and T. Graepel. Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences, 110(15):5802–5805, 2013.

[56] R. Kotov, W. Gamez, F. Schmidt, and D. Watson. Linking "big" personality traits to anxiety, depressive, and substance use disorders: A meta-analysis. Psychological Bulletin, 136(5):768, 2010.

[57] J. A. Krosnick and D. F. Alwin. An evaluation of a cognitive theory of response-order effects in survey measurement. Public Opinion Quarterly, 51(2):201–219, 1987.

[58] K. Lee and M. C. Ashton. Psychometric properties of the HEXACO Personality Inventory. Multivariate Behavioral Research, 39(2):329–358, 2004.

[59] X. Li, Y. Li, S. Joty, L. Liu, F. Huang, L. Qiu, and L. Bing. Does GPT-3 demonstrate psychopathy? Evaluating large language models from a psychological perspective. CoRR, abs/2212.10529, 2023.
[60] P. P. Liang, C. Wu, L.-P. Morency, and R. Salakhutdinov. Towards understanding and mitigating social biases in language models. CoRR, abs/2106.13219, 2021.

[61] R. Likert. A Technique for the Measurement of Attitudes. Number 136–165. Archives of Psychology, 1932.

[62] S. Lin, J. Hilton, and O. Evans. TruthfulQA: Measuring how models mimic human falsehoods. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3214–3252, Dublin, Ireland, May 2022. Association for Computational Linguistics.

[63] N. F. Liu, K. Lin, J. Hewitt, A. Paranjape, M. Bevilacqua, F. Petroni, and P. Liang. Lost in the middle: How language models use long contexts. CoRR, abs/2307.03172, 2023.
[64] R. K. Mahabadi, L. Zettlemoyer, J. Henderson, M. Saeidi, L. Mathias, V. Stoyanov, and M. Yazdani. Perfect: Prompt-free and efficient few-shot learning with language models. CoRR, abs/2204.01172, 2022.

[65] K. Mahowald, A. A. Ivanova, I. A. Blank, N. Kanwisher, J. B. Tenenbaum, and E. Fedorenko. Dissociating language and thought in large language models: A cognitive perspective. CoRR, abs/2301.06627, 2023.

[66] M. P. Marcus, B. Santorini, and M. A. Marcinkiewicz. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330, 1993.

[67] S. Matz, M. Kosinski, D. Stillwell, and G. Nave. Psychological framing as an effective approach to real-life persuasive communication. ACR North American Advances, 2017.

[68] R. R. McCrae and A. Terracciano. Universal features of personality traits from the observer's perspective: Data from 50 cultures. Journal of Personality and Social Psychology, 88(3):547, 2005.
[69] R. P. McDonald. Test theory: A unified treatment. Lawrence Erlbaum Associates Publishers, 1999.

[70] S. Merity, C. Xiong, J. Bradbury, and R. Socher. Pointer sentinel mixture models. In International Conference on Learning Representations, 2017.

[71] S. Messick. Standards of validity and the validity of standards in performance assessment. Educational Measurement: Issues and Practice, 14(4):5–8, 1995.

[72] S. Messick. Test validity: A matter of consequence. Social Indicators Research, 45:35–44, 1998.

[73] S. Min, X. Lyu, A. Holtzman, M. Artetxe, M. Lewis, H. Hajishirzi, and L. Zettlemoyer. Rethinking the role of demonstrations: What makes in-context learning work? CoRR, abs/2202.12837, 2022.
[74] M. Miotto, N. Rossberg, and B. Kleinberg. Who is GPT-3? An exploration of personality, values and demographics. In Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS), pages 218–227, Abu Dhabi, UAE, Nov. 2022. Association for Computational Linguistics.

[75] M. Mitchell and D. C. Krakauer. The debate over understanding in AI's large language models. Proceedings of the National Academy of Sciences, 120(13):e2215907120, 2023.

[76] J. Mökander, J. Schuett, H. R. Kirk, and L. Floridi. Auditing large language models: A three-layered approach. AI and Ethics, pages 1–31, 2023.

[77] D. Nettle. The evolution of personality variation in humans and other animals. American Psychologist, 61(6):622, 2006.

[78] University of Cambridge Psychometrics Centre. Apply Magic Sauce API.

[79] OpenAI. ChatGPT, 2022.

[80] OpenAI. GPT-4 technical report. CoRR, abs/2303.08774, 2023.
[81] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. Christiano, J. Leike, and R. Lowe. Training language models to follow instructions with human feedback. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 27730–27744. Curran Associates, Inc., 2022.
[82] D. Paperno, G. Kruszewski, A. Lazaridou, N. Q. Pham, R. Bernardi, S. Pezzelle, M. Baroni, G. Boleda, and R. Fernández. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1525–1534, Berlin, Germany, Aug. 2016. Association for Computational Linguistics.

[83] G. Park, H. A. Schwartz, J. C. Eichstaedt, M. L. Kern, M. Kosinski, D. J. Stillwell, L. H. Ungar, and M. E. Seligman. Automatic personality assessment through social media language. Journal of Personality and Social Psychology, 108(6):934, 2015.

[84] H. Y. Park, B. M. Wiernik, I. Oh, E. Gonzalez-Mulé, D. S. Ones, and Y. Lee. Meta-analytic five-factor model personality intercorrelations: Eeny, meeny, miney, moe, how, which, why, and where to go. Journal of Applied Psychology, 105:1490–1529, 2020.
[85] L. Parks-Leduc, G. Feldman, and A. Bardi. Personality traits and personal values: A meta-analysis. Personality and Social Psychology Review, 19(1):3–29, 2015.

[86] M. Pellert, C. M. Lechner, C. Wagner, B. Rammstedt, and M. Strohmaier. Large language models open up new opportunities and challenges for psychometric assessment of artificial intelligence. Oct. 2022.

[87] J. W. Pennebaker and L. A. King. Linguistic styles: Language use as an individual difference. Journal of Personality and Social Psychology, 77(6):1296, 1999.

[88] P. M. Podsakoff, S. B. MacKenzie, J.-Y. Lee, and N. P. Podsakoff. Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5):879–903, 2003.
[89] G. Qin, Y. Feng, and B. Van Durme. The NLP task effectiveness of long-range transformers. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 3774–3790, Dubrovnik, Croatia, May 2023. Association for Computational Linguistics.

[90] B. De Raad, M. Perugini, M. Hřebíčková, and P. Szarota. Lingua franca of personality: Taxonomies and structures based on the psycholexical approach. Journal of Cross-Cultural Psychology, 29(1):212–232, 1998.

[91] B. W. Roberts. A revised sociogenomic model of personality traits. Journal of Personality, 86(1):23–35, 2018.

[92] B. W. Roberts, N. R. Kuncel, R. Shiner, A. Caspi, and L. R. Goldberg. The power of personality: The comparative validity of personality traits, socioeconomic status, and cognitive ability for predicting important life outcomes. Perspectives on Psychological Science, 2(4):313–345, 2007.
[93] B. W. Roberts and H. J. Yoon. Personality psychology. Annual Review of Psychology, 73(1):489–516, 2022.

[94] J.-P. Rolland. The cross-cultural generalizability of the Five-Factor model of personality. In R. R. McCrae and J. Allik, editors, The Five-Factor Model of Personality Across Cultures, pages 7–28. Springer US, Boston, MA, 2002.

[95] J. Rust, M. Kosinski, and D. Stillwell. Modern Psychometrics: The Science of Psychological Assessment. Routledge, 4 edition, 2020.

[96] G. Saucier and L. R. Goldberg. Lexical studies of indigenous personality factors: Premises, products, and prospects. Journal of Personality, 69(6):847–879, 2001.

[97] P. Schramowski, C. Turan, N. Andersen, C. A. Rothkopf, and K. Kersting. Large pre-trained language models contain human-like biases of what is right and wrong to do. Nature Machine Intelligence, 4(3):258–268, 2022.
[98] H. A. Schwartz, J. C. Eichstaedt, M. L. Kern, L. Dziurzynski, S. M. Ramones, M. Agrawal, A. Shah, M. Kosinski, D. Stillwell, M. E. P. Seligman, and L. H. Ungar. Personality, gender, and age in the language of social media: The open-vocabulary approach. PLOS ONE, 8(9):1–16, 09 2013.

[99] G. Serapio-García, D. Valter, and C. Crepy. PsyBORGS: Psychometric Benchmark of Racism, Generalization, and Stereotyping.

[100] A. Shaw, M. Kapnek, and N. A. Morelli. Measuring creative self-efficacy: An item response theory analysis of the Creative Self-Efficacy Scale. Frontiers in Psychology, 12:678033, 2021.

[101] K. Shuster, M. Komeili, L. Adolphs, S. Roller, A. Szlam, and J. Weston. Language models that seek for knowledge: Modular search & generation for dialogue and prompt completion. CoRR, abs/2203.13224, 2022.
[102] L. Simms, T. F. Williams, and E. N. Simms. Assessment of the Five Factor Model. In T. A. Widiger, editor, The Oxford Handbook of the Five Factor Model, pages 353–380. Oxford University Press, 05 2017.

[103] U. Singh and P. Aarabhi. Can AI have a personality? In 2023 IEEE Conference on Artificial Intelligence (CAI), pages 205–206, 2023.

[104] X. Song, A. Gupta, K. Mohebbizadeh, S. Hu, and A. Singh. Have large language models developed a personality? Applicability of self-assessment tests in measuring personality in LLMs. CoRR, abs/2305.14693, 2023.

[105] S. Sun, K. Krishna, A. Mattarella-Micke, and M. Iyyer. Do long-range language models actually use long-range context? In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 807–822, Online and Punta Cana, Dominican Republic, Nov. 2021. Association for Computational Linguistics.
[106] R. Tang, Y.-N. Chuang, and X. Hu. The science of detecting LLM-generated texts. CoRR, abs/2303.07205, 2023.

[107] A. Tapus, C. Ţăpuş, and M. J. Matarić. User–robot personality matching and assistive robot behavior adaptation for post-stroke rehabilitation therapy. Intell. Serv. Robot., 1(2):169–183, Apr. 2008.

[108] M. Tavast, A. Kunnari, and P. Hämäläinen. Language models can generate human-like self-reports of emotion. In 27th International Conference on Intelligent User Interfaces, IUI '22 Companion, pages 69–72, New York, NY, USA, 2022. Association for Computing Machinery.
[109] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample. LLaMA: Open and efficient foundation language models. CoRR, abs/2302.13971, 2023.

[110] T. Ullman. Large language models fail on trivial alterations to theory-of-mind tasks. CoRR, abs/2302.08399, 2023.

[111] E. Uysal, S. Alavi, and V. Bezençon. Trojan horse or useful helper? A relationship perspective on artificial intelligence assistants with humanlike features. Journal of the Academy of Marketing Science, pages 1–23, 2022.
[112] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.

[113] D. Watson and L. A. Clark. On traits and temperament: General and specific factors of emotional experience and their relation to the Five-Factor model. Journal of Personality, 60(2):441–476, 1992.

[114] D. Wechsler. The measurement of adult intelligence (3rd ed.). Williams & Wilkins Co, Baltimore, 1946.

[115] J. Wei, M. Bosma, V. Y. Zhao, K. Guu, A. W. Yu, B. Lester, N. Du, A. M. Dai, and Q. V. Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations, 2022.
[116] J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler, E. H. Chi, T. Hashimoto, O. Vinyals, P. Liang, J. Dean, and W. Fedus. Emergent abilities of large language models. Transactions on Machine Learning Research, 2022.

[117] J. Wei, J. Wei, Y. Tay, D. Tran, A. Webson, Y. Lu, X. Chen, H. Liu, D. Huang, D. Zhou, and T. Ma. Larger language models do in-context learning differently. CoRR, abs/2303.03846, 2023.
[118] L. Weidinger, J. Uesato, M. Rauh, C. Griffin, P.-S. Huang, J. Mellor, A. Glaese, M. Cheng, B. Balle, A. Kasirzadeh, C. Biles, S. Brown, Z. Kenton, W. Hawkins, T. Stepleton, A. Birhane, L. A. Hendricks, L. Rimell, W. Isaac, J. Haas, S. Legassick, G. Irving, and I. Gabriel. Taxonomy of risks posed by language models. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, FAccT '22, page 214–229, New York, NY, USA, 2022. Association for Computing Machinery.

[119] Z. Yao, C. Li, X. Wu, S. Youn, and Y. He. A comprehensive study on post-training quantization for large language models. CoRR, abs/2303.08302, 2023.

[120] J. K. Young and A. A. Beaujean. Measuring personality in wave I of the National Longitudinal Study of Adolescent Health. Front. Psychol., 2:158, July 2011.
[121] W. Youyou, M. Kosinski, and D. Stillwell. Computer-based personality judgments are more accurate than those made by humans. Proceedings of the National Academy of Sciences, 112(4):1036–1040, 2015.

[122] J. Zamfirescu-Pereira, R. Y. Wong, B. Hartmann, and Q. Yang. Why Johnny can't prompt: How non-AI experts try (and fail) to design LLM prompts. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI '23, New York, NY, USA, 2023. Association for Computing Machinery.

[123] S. Zhang, E. Dinan, J. Urbanek, A. Szlam, D. Kiela, and J. Weston. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204–2213, Melbourne, Australia, July 2018. Association for Computational Linguistics.
[124] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong, Y. Du, C. Yang, Y. Chen, Z. Chen, J. Jiang, R. Ren, Y. Li, X. Tang, Z. Liu, P. Liu, J.-Y. Nie, and J.-R. Wen. A survey of large language models. CoRR, abs/2303.18223, 2023.

[125] D. M. Ziegler, N. Stiennon, J. Wu, T. B. Brown, A. Radford, D. Amodei, P. Christiano, and G. Irving. Fine-tuning language models from human preferences. CoRR, abs/1909.08593, 2020.

[126] M. Zimmerman. Diagnosing personality disorders: A review of issues and research methods. Archives of General Psychiatry, 51(3):225–245, 1994.

[127] R. E. Zinbarg, W. Revelle, I. Yovel, and W. Li. Cronbach's α, Revelle's β, and McDonald's ωh: Their relations with each other and two alternative conceptualizations of reliability. Psychometrika, 70:123–133, 2005.
# Acknowledgements

We thank Lucas Dixon, Douglas Eck, and Kathy Meier-Hellstern for their feedback on early versions of this paper. We also thank David Stillwell for facilitating research access to the Apply Magic Sauce API. Finally, we thank Jason Rentfrow and Neda Safaee-Rad for their advice on personality-related aspects of the paper. G.S-G. is supported by the Bill & Melinda Gates Foundation through a Gates Cambridge Scholarship [OPP1144].

# Author Contributions
M.A., C.C., M.M., M.S., and G.S-G. conceived the project. G.S-G. contributed methodology to establish reliability and construct validity and for psychometric test administration and statistical analysis. M.S. contributed scaled-up software infrastructure and preliminary experiments and investigations. C.C. and M.S. implemented the LLM hosting infrastructure for experiments. M.A., M.S., and G.S-G. contributed to the conceptual design and analysis, and G.S-G. devised and implemented the methods for personality shaping. G.S-G. and L.S. designed, and M.S., G.S-G., and L.S. implemented, the downstream task experiment. C.C. and M.S. carried out data visualization. M.S. carried out the word cloud analysis. S.F. and P.R. provided discussion of LLM mechanisms and analysis of LLM performance. A.F., M.M., M.S., and G.S-G. contributed limitations, future directions, and ethical concerns discussions. P.R. and
# Competing Interests

This study was funded by Alphabet Inc ('Alphabet') and/or a subsidiary thereof. A.F., C.C., G.S-G., M.M., and Mustafa Safdari were employees of Alphabet at the time of this writing and may own stock as part of the standard compensation package. M.M. is also affiliated with the University of Southern California. G.S-G. and L.S. are affiliated with the University of Cambridge. G.S-G. is also supported by the Bill & Melinda Gates Foundation through a Gates Cambridge Scholarship [OPP1144]. S.F. and P.R. are affiliated with Keio University. M.A. is affiliated with the University of California, Berkeley.

# Code Availability

The code used to administer psychometric tests to LLMs is intended to be interoperable across LLMs and is open-sourced and available at the Google Research GitHub repository for the Psychometric Benchmark of Racism, Generalization, and Stereotyping (PsyBORGS; manuscript in prep).3

The remaining Python and R code used to generate our prompt sets and statistically analyze reliability, construct validity, and trait shaping can be made available upon request, and will be added to open-source repositories for wider public use soon.
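For concreteness, the sketch below shows how a single Likert-scored survey item might be posed to a model and parsed. This is not the PsyBORGS implementation: the `generate` function, the prompt wording, and the parsing logic are all placeholder assumptions standing in for whatever model API and item phrasing a given study uses.

```python
import re

def generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call; swap in a real model API."""
    return "3"  # placeholder response for illustration

# A standard five-point rating scale of the kind used by Likert-type inventories.
LIKERT = {1: "very inaccurate", 2: "moderately inaccurate",
          3: "neither accurate nor inaccurate",
          4: "moderately accurate", 5: "very accurate"}

def administer_item(item: str, persona: str = "") -> int:
    """Pose one inventory item to the model and return an integer rating 1-5."""
    scale = "; ".join(f"{k} = {v}" for k, v in LIKERT.items())
    prompt = (f"{persona}\n"
              f"Rate how accurately this statement describes you "
              f"on a scale of 1-5 ({scale}).\n"
              f'Statement: "{item}"\nRating:')
    reply = generate(prompt)
    match = re.search(r"[1-5]", reply)  # keep the first in-range digit
    if match is None:
        raise ValueError(f"unparseable response: {reply!r}")
    return int(match.group())

print(administer_item("I worry about things."))  # -> 3 with the stub above
```

In practice, item responses gathered this way would then be aggregated into subscale scores and subjected to the reliability and validity analyses described in the main text.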
# Data Availability

The data generated by the LLMs tested in this work, either the psychometric test score data or open-ended text responses to a real-world task prompt, are available upon reasonable request. The psychometric tests used in this study were accessed from their respective original publications and, where applicable, public research repositories. We used items of these tests as LLM prompt inputs in a non-commercial research capacity. The authors and copyright holders of these tests govern their availability and use. The 50 Persona Descriptions employed in our structured prompts were reproducibly randomly sampled from the true-cased version⁴ of the PersonaChat dataset [123]. PersonaChat is a publicly available, crowd-sourced dataset of 1,155 fictional human persona descriptions. For analysis of personality traits on generated text, this study used the Apply Magic Sauce (AMS) API⁵, a validated psychodemographic research tool that predicts personality from open-ended text [55].

# A Large Language Models

# A.1 Language Modeling

Language modeling is a fundamental task in natural language processing (NLP). It is the basis of many solutions to a wide variety of problems involving AI systems with linguistic inputs. Downstream NLP tasks that leverage language models include (among many others):

• natural language understanding,
• question answering,
• machine translation,
• document summarization,
• dialog systems.
The fundamental goal of language modeling is to assign high probabilities to utterances (usually sentences in plain text) that are likely to appear in data (i.e., belong to the language) and low probabilities to strings of words that are not. A trained language model can then be used to assign probabilities to arbitrary sequences of words. In the past, this was done by parametric statistical models estimated from data. However, those models have been replaced with much more successful deep neural network-based methods. Generally, a modern large language model (LLM) is a neural network taking strings of words as input and returning a probability measure for each of those strings. The network is trained so that its outputs correspond to the likelihood that given input strings conform to a particular language, as induced from large quantities of text (often called a corpus). Normally, instead of thinking of a language model in terms of estimating the joint probability of a string of words, we view it in terms of its ability to predict a continuation based on existing context. A neural language model is therefore usually trained to compute the conditional probability of a word w_n following a sequence of words w_1, w_2, ..., w_{n-1}.
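Formally, these per-step conditionals recover the joint probability of a whole utterance through the chain-rule factorization:

```latex
% Autoregressive factorization a neural language model is trained to realize:
P(w_1, w_2, \ldots, w_n) \;=\; \prod_{i=1}^{n} P\!\left(w_i \,\middle|\, w_1, \ldots, w_{i-1}\right)
```

So a model that predicts each next word well can also score arbitrary sequences, which is the property the scoring-based test administration in this paper relies on.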
# A.2 Attention

Recent advances in LLMs, and NLP more broadly, have been based on innovative uses of various forms of attention in neural networks. Attention was initially introduced as an improvement to recurrent encoder-decoder architectures [5] in the context of neural machine translation systems. Subsequently, it was discovered that the idea of attention alone can be used as a basis for language modeling systems. A seminal paper titled “Attention Is All You Need” [112] introduced a new type of neural network architecture for extracting deep contextualized text representations from raw natural language data, using a process based predominantly on repeated application of the “self-attention” operation, in a model called the transformer. This kind of model transforms the original vector-space representation of linguistic units through a sequence of embedding spaces, where each successive mapping recomputes the representation of every token⁶ in the context of its surrounding tokens. As such, it allows the semantics of words, as seen by the neural AI system, to vary depending on the context and to evolve over time. Such representations produced significant performance improvements on natural language understanding tasks. The transformer architecture was composed of two stacks of self-attention blocks forming an encoder-decoder architecture, originally designed as a sequence transducer for neural machine translation.

⁶ A token is the smallest unit of text that a large language model can process. Tokens can be individual characters, words, or subwords, depending on the specific tokenization method used. The model assigns a unique identifier to each token, and these identifiers are then used to represent the text in the model's internal representations.
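To make the self-attention operation concrete, here is a minimal single-head sketch in NumPy. It is illustrative only (no masking, no multiple heads, arbitrary sizes; the names Wq, Wk, Wv are our own), but it shows how each token's representation is recomputed as a weighted combination of all tokens in its context:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over token vectors X.

    Each output row is a context-dependent re-representation of the
    corresponding token: an attention-weighted sum over all tokens.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # contextualized embeddings

rng = np.random.default_rng(0)
n_tokens, d = 5, 8
X = rng.normal(size=(n_tokens, d))                   # initial token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)           # (5, 8)
```

A transformer stacks many such blocks (with feed-forward layers, residual connections, and normalization between them), so each token's representation is repeatedly recomputed in ever richer contexts.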
# A.3 Decoder-only Architecture

Currently, large language models (LLMs) are usually based on the decoder-only transformer architecture [11, 15, 79, 80, 109]. A sequence of text tokens, usually representing a user prompt (e.g., a question), is first tokenized by splitting the text into morpheme-like subword units using a deterministic algorithm inspired by information-theoretic ideas.
This sequence of tokens is then embedded into a high-dimensional vector space, where each token becomes a sequence of floating-point numbers. This initial point cloud of vectors representing the linguistic units of the prompt is then transformed by a sequence of nonlinear mappings between high-dimensional representation spaces. The final representation is used to compute a probability distribution over possible continuations of the text, conditioned on the original prompt. The predominant method of training such models is gradient descent optimization (i.e., the backpropagation algorithm), resulting in representations that are informative for predicting the contexts in which words appear within the training corpus. This simple self-supervised criterion leads to emergent abilities of the model, spanning the syntax, semantics, and pragmatics of natural language use. The distributional hypothesis, which forms a fundamental assumption behind neural language model training, states that syntactic and semantic relationships between words can be inferred from their context, i.e., their co-occurrence patterns with other words in the corpus. As a result, optimizing model parameters based on n-grams of tokens extracted from large quantities of natural language text generates informative representations of linguistic units in submanifolds of high-dimensional real vector spaces. The geometric and topological features of these induced representation manifolds determine the behavior of LLMs.
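The embed-transform-predict pipeline just described can be caricatured in a few lines. This is a deliberately crude sketch under stated assumptions: the tanh mixing layers stand in for the stack of self-attention blocks, the embedding table is reused as the output projection, and all sizes are arbitrary; it is not the architecture of any model studied here.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d, n_layers = 100, 16, 4

E = rng.normal(size=(vocab_size, d))       # token embedding table
layers = [rng.normal(size=(d, d)) for _ in range(n_layers)]

def next_token_distribution(token_ids):
    """Prompt -> point cloud of vectors -> repeated remapping -> distribution."""
    H = E[np.array(token_ids)]             # embed the token sequence
    for W in layers:                       # stand-in for transformer blocks
        H = np.tanh(H @ W)                 # nonlinear re-representation
    logits = H[-1] @ E.T                   # project last position onto the vocabulary
    e = np.exp(logits - logits.max())
    return e / e.sum()                     # P(next token | prompt)

p = next_token_distribution([3, 14, 15, 92])
print(p.shape, p.sum())                    # (100,) 1.0
```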
The models trained for dialogue, including all models used in our work, are of the autoregressive type. This means that the output from the model itself becomes part of the context on which future outputs are conditioned. This allows the model to form a contextual memory of the conversation, including its own outputs.
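Operationally, autoregressive decoding is a loop in which each sampled token is appended to the context before the next step. The sketch below uses a hypothetical toy next-token distribution as a stand-in for a real LLM:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size = 100
W = rng.normal(size=(vocab_size, vocab_size))   # toy conditional logits

def next_token_distribution(context):
    """Toy stand-in for an LLM: distribution over the next token given context."""
    z = W[context[-1]]
    e = np.exp(z - z.max())
    return e / e.sum()

def generate(prompt_ids, n_steps):
    """Autoregressive decoding: every sampled token is appended to the context,
    so the model's own outputs condition all of its future outputs."""
    context = list(prompt_ids)
    for _ in range(n_steps):
        p = next_token_distribution(context)
        context.append(int(rng.choice(vocab_size, p=p)))
    return context

print(generate([3, 14, 15, 92], n_steps=8))
```

The growing `context` list is exactly the "contextual memory" described above: by the end of generation it interleaves the original prompt with the model's own prior outputs.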
Current state-of-the-art LLMs contain trillions of parameters and are trained on corpora of text (such as books, articles, and websites) and code [21, 14] that contain billions of n-gram patterns, allowing them to learn the statistical relationships between words and phrases [116], and consequently the patterns, structures, and semantics of language [66, 82, 70, 30]. In this work, we primarily explore decoder-only, autoregressive LLMs such as PaLM [15], where the input is usually a partial or complete sequence of tokens, and the model generates the next token in the sequence based on the previous tokens it has seen, in an iterative process.

# A.4 Controlling LLM behavior
There are three main techniques that change or control an LLM's behavior and output with respect to a given input: pretraining (training the LLM on a large corpus of text [11, 15, 109]), fine-tuning (i.e., further training a pretrained LLM on a smaller dataset specific to a particular task or domain [125, 115, 79, 81]), and prompting. While pretraining and fine-tuning affect model behavior by directly altering the model's weight parameters, prompting does so indirectly, by influencing the activation of certain neurons or the flow of information through the model's inference process. The most significant aspect of using prompts to control LLM behavior is to carefully design or engineer prompts that generate desired outputs from the LLM. Several types of prompt engineering techniques are commonly used with LLMs. In few-shot prompting [11, 73, 64], a limited amount of example data is provided to the model in a prompt to guide it to perform a task. By leveraging this small set of examples, the LLM can generalize and produce responses beyond the provided instances.
Few-shot prompting relies on the ability to bias the LLM's responses based on the input prompt. But because it introduces a bias, this method is not useful in cases where the goal is to probe the default bias of the LLM, i.e., the behavior or tendency of the LLM to produce certain outputs (e.g., certain psychometric survey responses, in our case). Zero-shot prompting [115, 53], on the other hand, involves instructing the model to generate responses for tasks it has not been specifically trained on, without providing any examples, relying instead on the LLM's pre-existing knowledge and language understanding acquired during pre-training. This method provides insights into the language priors and distribution learned by the LLM, such as which tokens are more correlated with others.
For instance, if asked to complete the input prompt “She went to see an expert about her stroke, who”, an LLM trained on medical domain data is likely to respond “advised her to get an ECG test.”, whereas an LLM trained on sports data might complete it as “coached her about the best techniques from top golf pros.” Several recent works in the field of Responsible AI have attempted to uncover latent language biases in LLMs, to identify potential for harm, and to suggest mitigation techniques [60, 122]. Similarly, our work used zero-shot prompt engineering to analyze how latent linguistic features in LLMs give rise to a coherent personality when quantified psychometrically. We further analyzed how those traits can be modified by engineering specific prompts that affect the latent linguistic features in these LLMs.
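The contrast between the two regimes is easiest to see in the prompts themselves. The sketch below builds one zero-shot and one few-shot prompt around a single survey-style item; the item text, postamble wording, and example answers are hypothetical stand-ins, not this paper's exact prompt set:

```python
# Hypothetical prompt templates contrasting zero-shot and few-shot prompting.
item = "I see myself as someone who is talkative."
postamble = ('please rate your level of agreement on a scale from 1 to 5 '
             '(where 1 = "disagree strongly" and 5 = "agree strongly"):')

# Zero-shot: no examples, probing the model's default response tendency.
zero_shot = (
    f"For the following statement, {postamble}\n"
    f'Statement: "{item}"\n'
    "Answer:"
)

# Few-shot: worked examples bias the model toward a response format (and,
# potentially, toward particular answers, which is why it is avoided when
# the goal is to measure the model's default bias).
few_shot = (
    'Statement: "I am always prepared." Answer: 4\n'
    'Statement: "I get stressed out easily." Answer: 2\n'
    f'Statement: "{item}" Answer:'
)

print(zero_shot, few_shot, sep="\n\n")
```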
# A.5 Modes of Inference in LLMs

LLMs offer various modes of inference in practice. In generative mode, the LLM is given a prompt or instruction, and it then generates text that is consistent with that prompt. This mode is useful for creative text generation tasks, such as story or poetry writing. In scoring mode, the LLM is given a (prompt, continuation) pair and assigns a score or probability to it, indicating its quality or relevance, or how likely it is to be generated by that model. Scoring mode [46] is often used for tasks like language evaluation [42]. Internally, the LLM has a single operating mode: computing the probability distribution over a sequence of tokens. The distinction between modes of inference is nevertheless conceptually useful when reasoning about model behavior.
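Generative mode corresponds to the sampling loop sketched in Appendix A.3 above; scoring mode reuses the same per-step distributions without sampling at all. A minimal sketch, again with a hypothetical toy model standing in for an LLM:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size = 100
W = rng.normal(size=(vocab_size, vocab_size))   # toy conditional logits

def step_probs(context):
    """Toy stand-in for an LLM's per-step distribution over the next token."""
    z = W[context[-1]]
    e = np.exp(z - z.max())
    return e / e.sum()

def score(prompt_ids, continuation_ids):
    """Scoring mode: log-probability the model assigns to a fixed
    (prompt, continuation) pair; no sampling is involved."""
    context, total = list(prompt_ids), 0.0
    for t in continuation_ids:
        total += np.log(step_probs(context)[t])  # P(t | everything so far)
        context.append(t)
    return total

# E.g., compare two candidate survey answers as continuations of one prompt.
print(score([1, 2, 3], [4, 5]), score([1, 2, 3], [7, 8]))
```

Because scoring is deterministic given fixed weights, it lends itself to the kind of reproducible, item-by-item test administration used in this work.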
# B Personality Psychology

The field of personality psychology defines personality as the enduring characteristics, traits, and patterns that shape thoughts, feelings, and behaviors across a diverse array of situations, e.g., social, spatial, and temporal contexts [93]. Decades of personality research synthesizing evidence from molecular genetics [91], evolutionary biology [77], neuroscience [24, 23], linguistics [10, 87], and cross-cultural psychology [68] have reduced such diverse characteristic patterns to a theorized handful of higher-order factors that define personality [22, 47].

Specific to linguistic evidence of a personality taxonomy, a central area of personality research concerns the lexical hypothesis of personality: that human personality is intrinsically connected to language. Since its origin with Sir Francis Galton in the 1800s [29], empirical research on the lexical hypothesis has posited that 1) important personality characteristics of a given society will be encoded in its language; and 2) the most important of those characteristics are likely encoded as single words [31, 90, 96]. This empirical framework grounds our work in three areas: the choice of one of our personality instruments (the BFI, described below), our prompts for shaping LLM personality, and the choice of the language-based assessment of personality for rating LLM-synthesized personality in a downstream task.

The Big Five model [48], the most commonly cited research taxonomy of personality formed through the research described above, identifies five personality trait dimensions (i.e., domains) and provides methodology to assess these dimensions in humans. The five dimensions are extraversion (EXT), agreeableness (AGR), conscientiousness (CON), neuroticism (NEU), and openness to experience (OPE). Each domain is further composed of various lower-order facets nested underneath.
# C Related Work

Recent attempts to probe personality and psychopathological traits in LLMs show that some models exhibit dark personality patterns [59], or demonstrate how to administer personality inventories to LLMs [86, 50, 44, 104, 13, 103, 45]. Some have also made efforts to induce desired levels of personality in LLMs using prompting [44, 13, 45] or fine-tuning [50, 59]. While these works outlined the utility and importance of measuring social phenomena in LLMs [86], there remains a need to match the standards used to evaluate the quality of human survey data when evaluating survey response data from LLMs; such standards are commonplace in quantitative social science [18]. To claim that scores on a psychological test are trustworthy and meaningful signals of what the test purports to measure, one must establish the test's reliability and construct validity.
Recent works that probe social and personality-related traits in LLMs have administered and analyzed questionnaires in ways that are unconventional in psychometrics. In this appendix, we focus on two additional elements not discussed in the main text. First, researchers collected LLM responses in the form of generated completions, often in dialog mode. For instance, [108] administered psychological emotion measures to LLMs in the form of a research interview transcript, where a fictitious researcher posed measure items to a fictitious participant, who was instructed to respond to these items on a numeric scale. In psychometrics, questionnaire-based methods of assessment are distinct from interview-based methods. Human answers to questionnaires and structured interviews measuring the same underlying construct do not necessarily converge (e.g., in the case of measuring personality disorders [126]).
Indeed, administering questionnaires in this way to LLMs creates an arbitrary viewpoint from which to elicit personality traits, and is likely biased by the ordering of the questionnaire itself [57] and by prompting the LLM to respond in an interview setting (where it may respond differently knowing an interviewer is observing). Moreover, each LLM response to a given questionnaire item was not an independent event, but considered all previous responses shown in the transcript. Second, the LLMs in these studies were not used deterministically. This not only hampers reproducibility, but also poses implications for reliability. Computing reliability metrics for questionnaires scored in this unconventional way is precarious, because such reliability metrics depend on item-level variance. If this item-level variance is contaminated by variation introduced by the model parameters in a different way for each item, it is difficult to compute valid indices of reliability. We overcame these challenges in our work by proposing a prompt and persona sampling methodology that allows variance to be linked across administrations of different measures.
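For context, internal-consistency reliability is typically computed as Cronbach's alpha, which is built directly from item-level variances, the quantity the argument above turns on. A minimal sketch (the simulated score matrix is illustrative, not real data):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).
    Because it is built from item-level variances, any variance injected by
    nondeterministic, per-item model behavior contaminates the estimate.
    """
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Rows: simulated "respondents" (e.g., persona-prompted LLM runs); columns: items.
scores = np.array([[4, 5, 4, 4], [2, 2, 3, 2], [5, 4, 5, 5], [3, 3, 2, 3]])
print(round(cronbach_alpha(scores), 3))
```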
PsyBORGS [99] administered a series of validated survey instruments of race-related attitudes and social bias to LLMs using psychometrics-informed prompt engineering. Our work utilized the PsyBORGS framework.

# D Tested Language Models

First, we focused on three different model sizes: small (8B), medium (62B), and large (540B), because LLM model size is a key determinant of performance for this model family [15, 124]. Second, because we are also interested in evaluating LLM personality in the Q&A context, we investigated PaLM model variants fine-tuned to follow instructions, as they have been shown to perform better than base models on prompting-based Q&A tasks [115]. We specifically selected variants fine-tuned with the popular FLAN dataset [115]. Third, we examined traditional and high-data training methods, known as Chinchilla training [40], which uses a fixed training budget to find the balance between model size and training dataset scale. Chinchilla training yields superior performance across a broad set of tasks [40, 124]. Table 2 lists the tested models along with their size and training configuration options. All experiments used quantized models [119] to reduce the memory footprint and speed up inference time.
# E Selected Personality Inventories

To measure personality, we selected two well-established psychometric measures to assess the Big Five taxonomy: one from the lexical tradition and one from the questionnaire tradition. Lexical-tradition measures are grounded in the hypothesis that personality can be captured by the adjectives found in a given language [29, 31], while questionnaire-tradition measures are developed with existing (and not necessarily lexical) taxonomies of personality in mind [102]. Lexical measures may be better suited for LLMs because they are language-based and rely on adjectival descriptions. We posit that questionnaire measures, which do not rely on trait adjectives for content, more conservatively test LLM abilities, as they are less abstract and more contextualized. Our work focused on Big Five measures of personality due to the Big Five's integrative robustness and cross-theory convergence in the human personality and psycholinguistics literature [102].
Our primary personality measure, the IPIP-NEO [33], is a 300-item open-source representation of the commercialized Revised NEO Personality Inventory [19]. The IPIP-NEO, hailing from the questionnaire tradition [102], involves rating descriptive statements (e.g., "[I] prefer variety to routine"; 60 per Big Five domain) on a 5-point Likert scale (1 = very inaccurate; 2 = moderately inaccurate; 3 = neither accurate nor inaccurate; 4 = moderately accurate; 5 = very accurate). We refer to these statements as items. The IPIP-NEO has been translated and validated in many languages, facilitating cross-cultural research across populations [43], and has been used in longitudinal studies to assess personality change and stability over time [120]. We chose this measure for its excellent psychometric properties, shown in [33].
As a robustness check and to assess convergent validity, we also measured LLM-synthesized personality using the Big Five Inventory (BFI) [48]. Developed in the lexical tradition, the BFI is a brief (44-item), adjectival-statement-based measure of the broad Big Five traits. The BFI asks participants to rate short descriptive statements (e.g., "I see myself as someone who is talkative"), also on a 5-point Likert scale. The resulting summary scores indicating levels of Big Five trait domains range from 1.00 to 5.00. In the psychology literature [102], the BFI has demonstrated excellent reliability (mean α reported across domain subscales = 0.83), convergent validity, and external validity.

Domain subscale scores across both measures were calculated following their original instructions, as the average of item response values, accounting for reverse-keyed items. Possible subscale scores ranged from 1.00 to 5.00, indicating the lowest and highest possible levels of a given Big Five domain, respectively.
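A minimal sketch of that scoring rule follows. The item responses and reverse-keyed positions are hypothetical; the only substantive step is that, on a 1-5 Likert scale, a reverse-keyed response x is rescored as 6 - x before averaging:

```python
def domain_score(responses, reverse_keyed):
    """Mean Likert score for one Big Five domain subscale.

    Reverse-keyed items are flipped (on a 1-5 scale, x -> 6 - x) before
    averaging, per the instruments' original scoring instructions.
    """
    scored = [6 - r if i in reverse_keyed else r
              for i, r in enumerate(responses)]
    return sum(scored) / len(scored)

# Hypothetical 8-item subscale with items 2 and 5 reverse-keyed.
print(domain_score([4, 5, 2, 4, 3, 1, 5, 4], reverse_keyed={2, 5}))  # 4.25
```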
Table 5: Item Postambles used to construct the prompts employed in the experiments to generate LLM-simulated survey responses. All administered measures used a Likert-type response scale that allowed 5 possible choices, with the exception of the PVQ-RR, which used a 6-point response scale. Item Postambles 1–5 were used for the BFI; 6–10 for the IPIP-NEO; 11–15 for the PANAS; 16–20 for the SSCS; 21–25 for the BPAQ; and 26–30 for the PVQ-RR.
1. please indicate the extent to which you agree or disagree on a scale from 1 to 5 (where 1 = "disagree strongly", 2 = "disagree a little", 3 = "neither agree nor disagree", 4 = "agree a little", and 5 = "agree strongly"):
2. please rate your level of agreement on a scale from 1 to 5 (where 1 = "disagree strongly", 2 = "disagree a little", 3 = "neither agree nor disagree", 4 = "agree a little", and 5 = "agree strongly"):
3. please rate your level of agreement or disagreement on a scale from 1 to 5 (where 1 = "disagree strongly", 2 = "disagree a little", 3 = "neither agree nor disagree", 4 = "agree a little", and 5 = "agree strongly"):
4. please rate how much you agree on a scale from 1 to 5 (where 1 = "disagree strongly", 2 = "disagree a little", 3 = "neither agree nor disagree", 4 = "agree a little", and 5 = "agree strongly"):
5. please rate how much you agree or disagree on a scale from 1 to 5 (where 1 = "disagree strongly", 2 = "disagree a little", 3 = "neither agree nor disagree", 4 = "agree a little", and 5 = "agree strongly"):
please rate how accurately this describes you on a scale from 1 to 5 (where 1 = "very inaccurate", 2 = "moderately inaccurate", 3 = "neither accurate nor inaccurate", 4 = "moderately accurate", and 5 = "very accurate"):
please indicate how accurate this is about you on a scale from 1 to 5 (where 1 = "very inaccurate", 2 = "moderately inaccurate", 3 = "neither accurate nor inaccurate", 4 = "moderately accurate", and 5 = "very accurate"):
please indicate how accurate or inaccurate this is about you on a scale from 1 to 5 (where 1 = "very inaccurate", 2 = "moderately inaccurate", 3 = "neither accurate nor inaccurate", 4 = "moderately accurate", and 5 = "very accurate"):
please rate how accurate this is about you on a scale from 1 to 5 (where 1 = "very inaccurate", 2 = "moderately inaccurate", 3 = "neither accurate nor inaccurate", 4 = "moderately accurate", and 5 = "very accurate"):
2307.00184
131
please rate how accurate or inaccurate this is about you on a scale from 1 to 5 (where 1 = "very inaccurate", 2 = "moderately inaccurate", 3 = "neither accurate nor inaccurate", 4 = "moderately accurate", and 5 = "very accurate"):
indicate to what extent you agree on a scale from 1 to 5 (where 1 = "very slightly or not at all agree", 2 = "agree a little", 3 = "agree moderately", 4 = "agree quite a bit", and 5 = "agree extremely"):
please rate your level of agreement on a scale from 1 to 5 (where 1 = "very slightly or not at all agree", 2 = "agree a little", 3 = "agree moderately", 4 = "agree quite a bit", and 5 = "agree extremely"):
please rate your level of agreement or disagreement on a scale from 1 to 5 (where 1 = "very slightly or not at all agree", 2 = "agree a little", 3 = "agree moderately", 4 = "agree quite a bit", and 5 = "agree extremely"):
2307.00184
132
please rate how much you agree on a scale from 1 to 5 (where 1 = "very slightly or not at all agree", 2 = "agree a little", 3 = "agree moderately", 4 = "agree quite a bit", and 5 = "agree extremely"):
please rate how much you agree or disagree on a scale from 1 to 5 (where 1 = "very slightly or not at all agree", 2 = "agree a little", 3 = "agree moderately", 4 = "agree quite a bit", and 5 = "agree extremely"):
please decide to what extent this describes you on a scale from 1 to 5 (where 1 = "strongly disagree", 2 = "disagree", 3 = "neither agree nor disagree", 4 = "agree", 5 = "strongly agree"):
please rate your level of agreement on a scale from 1 to 5 (where 1 = "strongly disagree", 2 = "disagree", 3 = "neither agree nor disagree", 4 = "agree", 5 = "strongly agree"):
please rate your level of agreement or disagreement on a scale from 1 to 5 (where 1 = "strongly disagree", 2 = "disagree", 3 = "neither agree nor disagree", 4 = "agree", 5 = "strongly agree"):
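Enumerating so many near-synonymous preambles serves a methodological purpose: crossing preamble variants with questionnaire items yields a family of prompts per item, so the stability of the resulting scores across phrasings can be assessed. A hypothetical sketch of that crossing follows, with invented placeholder items and abbreviated scale text.

```python
from itertools import product

# Hypothetical crossing of preamble variants with items; the item texts are
# invented placeholders and the scale definitions are abbreviated as "(...)".

preambles = [
    "please rate your level of agreement on a scale from 1 to 5 (...):",
    "please rate how much you agree on a scale from 1 to 5 (...):",
]
items = [
    "I am the life of the party.",
    "I worry about things.",
]

# Every (preamble, item) pair becomes one prompt variant.
prompts = [f'{p}\n"{i}"' for p, i in product(preambles, items)]
print(len(prompts))  # 2 preambles x 2 items = 4 prompts
```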
2307.00184
133
3 = "neither agree nor disagree", 4 = "agree", 5 = "strongly agree"):" please rate your level of agreement or disagreement on a scale from 1 to 5 (where 1 = "strongly disagree", 2 = "disagree", 3 = "neither agree nor disagree", 4 = "agree", 5 = "strongly agree"):" please rate how much you agree that this describes you on a scale from 1 to 5 (where 1 = "strongly disagree", 2 = "disagree", 3 = "neither agree nor disagree", 4 = "agree", 5 = "strongly agree"):" please rate how much you agree or disagree that this describes you on a scale from 1 to 5 (where 1 = "strongly disagree", 2 = "disagree", 3 = "neither agree nor disagree", 4 = "agree", 5 = "strongly agree"):" rate how characteristic this is of you on a scale from 1 to 5 (where 1 = "extremely uncharacteristic of me", 2 = "uncharacteristic of me", 3 = "neither characteristic nor uncharacteristic of me", 4 = "characteristic of me", and 5 = "extremely characteristic of me"):" please rate how characteristic this is of you on a scale from 1 to 5 (where 1 = "extremely
2307.00184
134
"characteristic of me", and 5 = "extremely characteristic of me"):" please rate how characteristic this is of you on a scale from 1 to 5 (where 1 = "extremely uncharacteristic of me", 2 = "uncharacteristic of me", 3 = "neither characteristic nor uncharacteristic of me", 4 = "characteristic of me", and 5 = "extremely characteristic of me"):" please rate how characteristic or uncharacteristic this is of you on a scale from 1 to 5 (where 1 = "extremely uncharacteristic of me", 2 = "uncharacteristic of me", 3 = "neither characteristic nor uncharacteristic of me", 4 = "characteristic of me", and 5 = "extremely characteristic of me"):" please indicate to what extent this is characteristic of you on a scale from 1 to 5 (where 1 = "extremely uncharacteristic of me", 2 = "uncharacteristic of me", 3 = "neither characteristic nor uncharacteristic of me", 4 = "characteristic of me", and 5 = "extremely characteristic of me"):" please indicate to what extent this is characteristic or uncharacteristic of you on a scale from 1 to 5 (where 1 = "extremely uncharacteristic of me", 2 = "uncharacteristic of me", 3 =
2307.00184
135
think about how much that person is or is not like you. Rate how much the person described is like you on a scale from 1 to 6 (where 1 = "not like me at all", 2 = "not like me", 3 = "a little like me", 4 = "moderately like me", 5 = "like me", and 6 = "very much like me"):
please rate how characteristic this is of you on a scale from 1 to 6 (where 1 = "not like me at all", 2 = "not like me", 3 = "a little like me", 4 = "moderately like me", 5 = "like me", and 6 = "very much like me"):
please rate how characteristic or uncharacteristic this is of you on a scale from 1 to 6 (where 1 = "not like me at all", 2 = "not like me", 3 = "a little like me", 4 = "moderately like me", 5 = "like me", and 6 = "very much like me"):
please indicate to what …
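Once a model selects an option on one of these scales, its choice must be converted back to a numeric value before scale scores can be computed. The sketch below shows generic Likert-style keying, including reverse-scoring for negatively worded items; it is a standard-practice illustration, not the paper's implementation.

```python
# Generic Likert keying (not the paper's code): map a chosen option to its
# numeric value, reverse-scoring negatively keyed items.

def score_response(choice: int, n_points: int = 5, reverse: bool = False) -> int:
    """Return the keyed score for a rating on a 1..n_points scale."""
    if not 1 <= choice <= n_points:
        raise ValueError(f"choice must be between 1 and {n_points}")
    return (n_points + 1 - choice) if reverse else choice

# A rating of 5 on a reverse-keyed 5-point item scores as 1; for the
# 6-point "like me" scale above, pass n_points=6.
assert score_response(5, reverse=True) == 1
assert score_response(4, n_points=6) == 4
```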