# Personality Traits in Large Language Models

Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić

arXiv: http://arxiv.org/pdf/2307.00184 (cs.CL, cs.AI, cs.CY, cs.HC). Published 2023-07-01, updated 2023-09-21.

Abstract: The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant human-like text. As LLMs increasingly power conversational agents used by the general public world-wide, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping personality in the generated text of such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of reliability and validity of synthetic LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific human personality profiles. We discuss application and ethical implications of the measurement and shaping method, in particular regarding responsible AI.

Example Item Postambles, presenting the standardized response options appended to each test item:

- ... please indicate to what extent this is like you on a scale from 1 to 6 (where 1 = "not like me at all", 2 = "not like me", 3 = "a little like me", 4 = "moderately like me", 5 = "like me", and 6 = "very much like me"):
- ... please indicate to what extent this is or is not like you on a scale from 1 to 6 (where 1 = "not like me at all", 2 = "not like me", 3 = "a little like me", 4 = "moderately like me", 5 = "like me", and 6 = "very much like me"):
Example Persona Descriptions combined with the test items to construct prompts (see Supplemental Table 6):

I like to garden. I like photography. I love traveling. I like to bake pies. I've a beard. I graduated high school. I like rap music. I live on a farm. I drive a truck. I blog about salt water aquarium ownership. I still love to line dry my clothes. I'm allergic to peanuts. I'll one day own a ferret. My mom raised me by herself and taught me to play baseball. Since young I've loved to cook. I auditioned in a cooking show. I think I've talent for it. I took classes while growing up. My name is tom. I try to watch what I eat. I enjoy eating italian food. Pizza is my favorite. I am east asian. I live by a lake. I am a mother. I own a custom upholstery shop. I'm a wife. I enjoy working out and learning new things. I'm a student in college. I'm studying software development. I play the guitar. I've three dogs at home. I hate to workout, but I need to. I am very good at the drums. I have a bicycle. I need to take my blood sugar everyday. I work in advertising. My mother is
dead. I like to hike. I've a golden retriever. I write fiction for fun. I can never decide between a chili corn dog and a cheesy hot dog. I drive more than an hour each way to work. I prefer the night to the day, but I love sunshine. I am a grandparent at 44. I like to smell my own farts. My beer gut is so huge I haven't seen my feet in two years. I am from San Fransico. I am always the one who buys the beers. I like to place blame on other people even when I know it is my fault. I lived most of my life not knowing who Bob marley was. When I cut loose, I lose control. We help each other out in my family. I despise my boss. I work over 60 hours a week as a restaurant manager. I prefer the simpler times. I like simple jokes. Some jokes go too far. I like the flintstones. It is my universe, and everyone else is just a character in it. I work as a dental assistant in a ritzy part of
town. I've borderline personality disorder. At night, I party hard in the Atlanta club scene, and I never miss a music festival. I watch a lot of tv. I live alone. My favorite food is a cheeseburger. I enjoy fishing. I work on cars for a living. I'm an animal rights activist. I hope to retire to Florida. I played in a band for 17 years. My mother and father are both in the church choir. I've taken formal music lessons since I was 5. I'm a musician. My best friend is in a band with me. I wish I could spend more time at home. I grew up in Kentucky. I'm a veteran. My favorite book is ender's game. I have a garden. I like to read. I am a vegan. I love country music. I love the beach. I like to read. I've depression and anxiety so I don't really go out a lot. I work at home, editing. I have a cat. I hope to move out soon. My favorite
food is mushroom ravioli. I've never met my father. My mother works at a bank. I work in an animal shelter. I love kids and dogs. I like to go shopping with my daughters. I like to cook. I love to chat with my friends. I swim often. I run track. I wear glasses all day. I take medication. I like to go on long hikes. I like to play volleyball. I like to come up with new hairstyles. I like to do my nails. I watch Jimmy Fallon's show every night. I have never kissed a woman. People notice how organized I am. I believe that I can achieve anything. I drive a lifted Chevy truck. I played football in high school. I am a roofer. I always have a beer after work. I love animals. My father worked for Ge. Green is my favorite color. I enjoy playing tennis. I'm an aspiring singer. I try to watch what I eat. I enjoy eating italian food. Pizza is my favorite. My name is tom. I am east asian. I'm allergic to peanuts.
I like eating vegetables. I love the Beatles. I'm usually very shy. I have trouble getting along with family. I go to high school. Math is my favorite subject. I live in the United States. I am a boy. I have a job as an it agent. I like smoking weed. My dad works for stifle. I love rap music. I'm a meataholic. I work in tv. I do not treat my girlfriend very well. I like to cook breakfast on sundays. I love to sing. I am a lesbian. I work on semi trucks for a living. My father was a driver himself. I got off the road when I married my sweetheart. I want to take her on vacations one day. My motor never stops running. I own an iPhone 7. I drink hot chocolate during the winter. I'm allergic to seafood. My mother used to read me bed time stories. I am eighteen years old. I'm going to major in business. I just bought my first car. I received a full scholarship to Florida state university. I live
in a tiny house to save money. I collect single malt scotch. I listen to blues and jazz. I tend bar on the weekends. During the week I go to college to become a lawyer. I love to go horseback riding whenever I can. I'm a mother of two beautiful boys. My family and I go camping every month. My favorite artist is Justin Bieber. I especially enjoy listening to the band the lumineers. I enjoy reading and walking on sunny days. I'm a happy person. I sing many songs. I play piano. My favorite color is yellow. My boyfriend is in the army. My father is dead. My hair is short. I'm a mother. I'm a nurse at a hospital. My favorite band is the rolling stones. I love to read and cook. My favorite food is mexican food. I deliver baked goods in the state where I live. My favorite hobby is playing recreational baseball. I spend my weekends camping. I'm a truck driver. My wife and two kids camp with me.
I am argentinian. I like to wear boots. I have many girlfriends. I like to eat beef. I like to ride horses. I recently had a private lunch with will ferrell. I am trying to become a male model in hollywood. I'm a huge fan of classical jazz. I am on a low carb diet. I want to put my photos to a music video starring Adam Levin. I want to travel the world taking photographs of my travels. I am a widow. I want to be a famous photographer. I am in the army. I fly airplanes. I enjoy building computers. I dropped out of college. I have three children. I live in the suburbs of a major city. I like to garden. I graduated college for secondary english education. I play guitar in the local band. I live on a small farm in Ohio. I am the youngest of three brothers. I have never been to the city. I'm a widow. I want to put my photos to a music video starring Adam Levin. I want to travel the world taking photographs of my travels.
I want to be a famous photographer. I like taking pictures. I still live at home with my parents. I play video games all day. I'm 32. I eat all take out. My friend once bought me a car. I am disabled and cannot walk. I take vitamin C when I have a cold. I do not eat bread. My favorite season is winter.
# F Simulating Population Variance Through Prompting
It was empirically necessary to introduce controlled variation in LLM-simulated survey data to assess their reliability and statistical relationships with outcomes of interest; in short, controlled variation was required to statistically test for reliability and construct validity. For instance, an Item Postamble presented the possible standardized responses the model can choose from, e.g.,
please rate your agreement on a scale from 1 to 5, where 1 is "strongly disagree", 2 is "disagree", 3 is "neither agree nor disagree", 4 is "agree", and 5 is "strongly agree".
We customized five variations of Item Postambles for each administered measure, such that all five variations would have parallel meanings across measures. Supplemental Table 5 lists all Item Postambles used in this work. This prompt design enabled thousands of variations of input prompts that could be tested, with two major advantages (a minimal sketch of the combinatorial prompt assembly follows Table 7). First, variance in psychometric test responses created by unique combinations of the Persona Descriptions (see Supplemental Table 6), Item Instructions (see Supplemental Table 7), and Item Postambles enabled us to quantify the validity of personality measurements in LLMs. Unlike single point estimates of personality, or even multiple estimates generated from random resampling of LLMs, diverse distributions of personality scores conditioned on reproducible personas make it possible to compute correlations between convergent personality measures and external, personality-related constructs. Second, variance in Item Preambles and Postambles facilitated a built-in robustness check: it was critical to know if personality scores remained reliable and valid across modifications of the context and instructions surrounding the original test items. They were indeed reliable and valid for three of the five models tested.
Table 7: Item Instructions used in Item Preambles across experiments to generate LLM-simulated survey responses.
Item Instructions:

- Considering the statement,
- Thinking about the statement,
- Reflecting on the statement,
- Evaluating the statement,
- Regarding the statement,
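To make the combinatorial design concrete, the sketch below assembles prompts as the cross product of persona descriptions, Item Instructions (Table 7), test items, and Item Postambles. This is a minimal illustration under our own naming, not the paper's released code; the example strings are stand-ins for the full sets in Supplemental Tables 5-7.

```python
import itertools

# Stand-in examples; the full sets appear in Supplemental Tables 5-7.
personas = [
    "I like to garden. I live on a farm. I drive a truck.",
    "I'm a student in college. I play the guitar.",
]
item_instructions = [
    "Considering the statement,",
    "Thinking about the statement,",
    "Reflecting on the statement,",
    "Evaluating the statement,",
    "Regarding the statement,",
]
items = ["[I see myself as someone who] is talkative."]
postambles = [
    'please rate your agreement on a scale from 1 to 5, where 1 is '
    '"strongly disagree", 2 is "disagree", 3 is "neither agree nor '
    'disagree", 4 is "agree", and 5 is "strongly agree".',
]

def build_prompts():
    """Yield one prompt per (persona, instruction, item, postamble) combination."""
    for persona, instruction, item, postamble in itertools.product(
        personas, item_instructions, items, postambles
    ):
        yield f'{persona}\n{instruction} "{item}", {postamble}'

prompts = list(build_prompts())
# 2 personas x 5 instructions x 1 item x 1 postamble = 10 prompt variations.
print(len(prompts))
print(prompts[0])
```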
# G Psychometrics
Psychometrics, a quantitative subfield of psychology and education science, encompasses the statistical theory and technique of measuring unobservable, latent phenomena called constructs, like personality, intelligence, and moral ideology. Psychometrics is foundational to the development and validation of standardized educational tests (e.g., the SAT, LSAT, GRE) [3], medical and psychological clinical assessments [114], and large-scale public opinion polls [37].
Psychometric tests (e.g., survey instruments, measures, multi-item scales) are tools for quantifying latent psychological constructs like personality. Psychometric tests enable statistical modeling of the true levels of unobservable target constructs by relying on multiple indirect, yet observable, measurements across a sample of individuals drawn from a wider population. We refer to items as the individual elements (i.e., descriptive statements, sometimes questions) used within a psychometric test designed to measure attributes or characteristics of a construct. Items are usually rated on a rating scale: a standardized set of response choices that allows researchers to quantify subjective phenomena. A Likert-type scale is the most common rating scale; it has respondents specify their level of agreement on a symmetric agree-disagree scale [61]. We refer to a subscale as a collection of items, usually resulting from a factor analysis, aimed at measuring a single psychological construct. Measures are themed collections of subscales.
For example, the Big Five Inventory (BFI) [48] is
a popular measure of personality; it comprises five multi-item subscales targeting each Big Five dimension. BFI Extraversion, for instance, is a subscale within the BFI specifically targeting the dimension of extraversion. An example item under BFI Extraversion would read, "[I see myself as someone who] is talkative." Participants rate their agreement with this item using the following 5-point Likert-type rating scale: 1 = disagree strongly; 2 = disagree a little; 3 = neither agree nor disagree; 4 = agree a little; 5 = agree strongly.
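To ground the terminology, here is a minimal sketch of how an item, its subscale, and its Likert-type rating scale can be represented and scored in code. The dataclass and the simple mean-scoring rule are illustrative assumptions, not the BFI's official scoring procedure (which, among other things, reverse-keys some items).

```python
from dataclasses import dataclass
from statistics import mean

# The 5-point Likert-type rating scale from the BFI example above.
BFI_RATING_SCALE = {
    1: "disagree strongly",
    2: "disagree a little",
    3: "neither agree nor disagree",
    4: "agree a little",
    5: "agree strongly",
}

@dataclass
class Item:
    """A single psychometric test item, rated on a Likert-type scale."""
    text: str
    subscale: str

talkative = Item(
    text="[I see myself as someone who] is talkative.",
    subscale="BFI Extraversion",
)

def score_subscale(responses: list[int]) -> float:
    """Score a subscale as the mean of its item responses (illustrative rule only)."""
    assert all(r in BFI_RATING_SCALE for r in responses)
    return mean(responses)

print(score_subscale([4, 5, 3]))  # 4.0
```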
How do we know that psychometric tests measure what they claim to measure, i.e., how do we establish the reliability, accuracy, and utility of the measures of personality, and of the constructs assessed in those measures? Validated scientific frameworks for establishing the reliability and construct validity of a new psychometric test [17, 71, 18] incorporate (but are not limited to) the following overarching standards:
- Reliability: Are test measurements dependable and consistent? In psychometrics, a test's reliability can be established in terms of internal consistency and factor saturation.
  - Internal consistency reliability: Is the test reliable across multiple measurements (i.e., its items)? In other words, do responses to the test's items form consistent patterns? Are test items correlated with each other?
  - Factor saturation: Do the test's items reflect the variance of one underlying factor or construct?
- Construct Validity: Do the test measurements actually reflect the underlying construct? This can be established by checking for convergent validity, discriminant validity, and criterion validity.
  - Convergent Validity: Does the test correlate with purported indicators (i.e., convergent tests) of the same or similar psychological construct? These correlations are called convergent correlations.
  - Discriminant Validity: Relative to their convergent correlations, are test scores relatively uncorrelated with scores on theoretically unrelated tests? These correlations are called discriminant correlations.
  - Criterion Validity: Does the test correlate with theoretically-related, non-tested phenomena or outcomes?
# G.1 Reliability: Is the Measurement Dependable?

The hallmark characteristic of a good psychometric test (or any empirical measure) of a target construct is its reliability, which reflects its ability to "measure one thing (i.e., the target construct) and only that thing, as precisely as possible" [18]. In this work, we balance our evaluations of reliability across three indices of reliability (Cronbach's Alpha α, Guttman's Lambda 6 λ6, and McDonald's Omega ω), weighing the pros and cons of each.
α, the most widely-known measure of internal consistency reliability, captures how responses to each item of a scale correlate with the total score of that scale [20]. However, α has many documented limitations. For instance, it relies on the assumption that all items of a test measure the same underlying construct, and it can be artificially inflated by a test's number of items [127]. Cronbach's α is computed as follows:
$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_t^2}\right) \qquad (1)$$
where $k$ is the number of items on the test, $\sigma_i^2$ is the variance associated with each item $i$, and $\sigma_t^2$ is the overall variance of total scores.
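Equation (1) translates directly into code. A minimal sketch, assuming responses arrive as a complete respondents-by-items matrix with no missing values:

```python
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Cronbach's alpha per Equation (1).

    responses: array of shape (n_respondents, k_items), no missing values.
    """
    k = responses.shape[1]
    item_variances = responses.var(axis=0, ddof=1)      # sigma_i^2 per item
    total_variance = responses.sum(axis=1).var(ddof=1)  # sigma_t^2 of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Toy data: 5 simulated respondents answering a 3-item subscale.
data = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
    [1, 2, 2],
])
print(round(cronbach_alpha(data), 3))
```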
In contrast to α, λ6 evaluates the variance of each item that can be captured by a multiple regression of all other items [35]. It is a less biased alternative to α because it is not affected by item differences in variance, although it is also biased by the number of items on a test. Guttman's λ6 is calculated as:
$$\lambda_6 = 1 - \frac{\sum_{i=1}^{k} e_i^2}{V_x} \qquad (2)$$
where $k$ is the number of items on the test, $e_i$ is the error term for item $i$ (the residual from regressing item $i$ on all other items), and $V_x$ is the variance of the total test score.
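A minimal sketch of Equation (2). It relies on the standard identity that the residual variance of regressing item i on all other items equals one over the i-th diagonal entry of the inverse covariance matrix; that identity is our implementation choice, not something taken from the paper.

```python
import numpy as np

def guttman_lambda6(responses: np.ndarray) -> float:
    """Guttman's lambda-6 per Equation (2).

    The error variance e_i^2 of regressing item i on all other items is
    recovered as 1 / (inverse covariance matrix)_ii.
    """
    cov = np.cov(responses, rowvar=False, ddof=1)        # item covariance matrix
    error_variances = 1.0 / np.diag(np.linalg.inv(cov))  # e_i^2 per item
    total_variance = responses.sum(axis=1).var(ddof=1)   # V_x
    return 1.0 - error_variances.sum() / total_variance

data = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
    [1, 2, 2],
])
print(round(guttman_lambda6(data), 3))
```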
To test more robustly for reliability (in terms of how well a test measures one underlying factor or construct) in a way that is unaffected by the number of items on a test, psychometricians compute McDonald's Omega (ω) [69, 127]. This metric is generally considered a less biased composite test of reliability [127, 34]. McDonald's ω uses confirmatory factor analysis to determine whether items statistically form a single factor or actually measure separate factors. It is calculated as:

$$\omega_h = \frac{\left(\sum_{i=1}^{k} \lambda_i\right)^2}{\sigma_x^2} \qquad (3)$$
where $\omega_h$ is McDonald's hierarchical omega, $k$ is the number of items on the test, $\lambda_i$ is the general-factor loading of the standardized score on item $i$, and $\sigma_x^2$ is the variance of total test scores.
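The sketch below only illustrates the quantity in Equation (3): it approximates the general-factor loadings with the first principal factor of the item correlation matrix rather than fitting a confirmatory factor model, which is what a faithful implementation (e.g., via a standard psychometrics package) would do.

```python
import numpy as np

def omega_h_approx(responses: np.ndarray) -> float:
    """Rough approximation of McDonald's hierarchical omega, Equation (3).

    Loadings come from the dominant eigenvector of the item correlation
    matrix (a stand-in for CFA loadings), rescaled to the raw-score metric.
    """
    corr = np.corrcoef(responses, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)              # ascending order
    std_loadings = np.sqrt(eigvals[-1]) * np.abs(eigvecs[:, -1])
    # Rescale so the denominator can be the variance of raw total scores.
    loadings = std_loadings * responses.std(axis=0, ddof=1)
    total_variance = responses.sum(axis=1).var(ddof=1)   # sigma_x^2
    return loadings.sum() ** 2 / total_variance

data = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
    [1, 2, 2],
])
print(round(omega_h_approx(data), 3))
```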
# G.2 Construct Validity: Is the Measurement Valid?
Since psychometric tests measure physically unobservable constructs, such as personality traits, it is imperative to establish that such tests measure what they claim to measure. This process is called establishing a test's construct validity. Construct validity is a comprehensive judgement of how the scores and the theoretical rationale of a test reasonably reflect the underlying construct the test intends to measure [72]. Recently, construct validity has become a crucial focus of AI responsibility and governance [41, 76]: operationalizing social phenomena in algorithmic systems in a principled way (e.g., through construct validation) is a core part of responsible AI. Bringing empirical rigor to the measurement of social constructs helps stakeholders make more informed judgments of characteristics that may be fair or harmful in AI systems. For instance, if low agreeableness is harmful in AI systems, we need a principled way to measure it.
There is extant work on establishing the validity of measurements of personality as a theoretical construct [93, 22, 47], as a powerful predictor of other important human traits and life outcomes [92, 9, 56], and in its manifestation in human language [31, 90, 96], which forms the basis of LLMs. However, establishing the validity of measurements of personality as a meaningful construct in LLMs has not yet been addressed.

Convergent and Discriminant Validity: In psychometrics, the convergent and discriminant validity of a test are evaluated using Campbell's classic framework [12], where a test's convergent validity is established by "sufficiently large" correlations with separate tests meant to measure the same target construct. For example, to validate a new test measuring depression, one could calculate the test's convergent correlations with the Beck Depression Inventory (BDI) [6], a widely-used measure of depression. To evaluate the discriminant validity of a test, psychometricians commonly gauge the extent to which the test's convergent correlations are stronger than its discriminant correlations, i.e., its correlations with tests of other constructs. As a concrete example, a new test of depression should correlate more strongly with the BDI than with, say, a test measuring English proficiency.
Criterion Validity: A common way to assess the criterion validity of a new psychometric test is to check its correlations with theoretically related external (non-test) criteria (hence the name, criterion validity) [18]. For example, to validate a new psychometric test of depression, one could test if it is substantially related to a known external criterion, like negative affect.
# H Methods for Constructing the Validity of LLM Personality Test Scores
Establishing Reliability: In LLM research, model responses to a series of seemingly related tasks intended to measure one latent construct may be anecdotally "consistent" [86, 50] or inconsistent [74]. Descriptive consistency, however, is not sufficient evidence that the responses to those tasks are statistically reliable reflections of the latent constructs they target (as described in Section G.2). To establish internal consistency reliability, we compute Cronbach's α (1) and Guttman's λ6 (2) on all IPIP-NEO and BFI subscales. To assess more complete composite reliability, we compute McDonald's ω (3) on all IPIP-NEO and BFI subscales.
Table 8: Criterion validity subscales per tested Big Five domain. PANAS = Positive and Negative Affect Schedule Scales; BPAQ = Buss-Perry Aggression Questionnaire; PVQ-RR = Revised Portrait Values Questionnaire; SSCS = Short Scale of Creative Self.
| IPIP-NEO Domain | External Criterion | Criterion Subscales |
| --- | --- | --- |
| Extraversion | Trait Emotion | PANAS Positive Affect; PANAS Negative Affect |
| Agreeableness | Aggression | BPAQ Physical Aggression; BPAQ Verbal Aggression; BPAQ Anger; BPAQ Hostility |
| Conscientiousness | Human Values | PVQ-RR Achievement; PVQ-RR Conformity; PVQ-RR Security |
| Neuroticism | Trait Emotion | PANAS Negative Affect; PANAS Positive Affect |
| Openness | Creativity | SSCS Creative Self-Efficacy; SSCS Creative Personal Identity |
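In code, Table 8 reduces to a mapping from each IPIP-NEO domain to its criterion subscales, and a criterion validity check correlates domain scores with each criterion's scores. A sketch with random stand-in data (the function and variable names are ours):

```python
import numpy as np

# Table 8 as a mapping from IPIP-NEO domain to external criterion subscales.
CRITERION_SUBSCALES = {
    "Extraversion": ["PANAS Positive Affect", "PANAS Negative Affect"],
    "Agreeableness": ["BPAQ Physical Aggression", "BPAQ Verbal Aggression",
                      "BPAQ Anger", "BPAQ Hostility"],
    "Conscientiousness": ["PVQ-RR Achievement", "PVQ-RR Conformity",
                          "PVQ-RR Security"],
    "Neuroticism": ["PANAS Negative Affect", "PANAS Positive Affect"],
    "Openness": ["SSCS Creative Self-Efficacy", "SSCS Creative Personal Identity"],
}

def criterion_correlations(domain_scores, criterion_scores, domain):
    """Pearson correlations of one IPIP-NEO domain with each of its criteria."""
    return {
        name: float(np.corrcoef(domain_scores, criterion_scores[name])[0, 1])
        for name in CRITERION_SUBSCALES[domain]
    }

# Stand-in data for illustration only.
rng = np.random.default_rng(1)
neuroticism = rng.normal(size=100)
criteria = {
    "PANAS Negative Affect": 0.7 * neuroticism + 0.3 * rng.normal(size=100),
    "PANAS Positive Affect": -0.5 * neuroticism + 0.5 * rng.normal(size=100),
}
print(criterion_correlations(neuroticism, criteria, "Neuroticism"))
```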
We designate a given reliability metric (RM; i.e., α, λ6, ω) < 0.50 as unacceptable, 0.50 ≤ RM < 0.60 as poor, 0.60 ≤ RM < 0.70 as questionable, 0.70 ≤ RM < 0.80 as acceptable, 0.80 ≤ RM < 0.90 as good, and RM ≥ 0.90 as excellent. High levels of singular internal consistency metrics like α are necessary but not sufficient conditions for demonstrating complete reliability. Therefore, for the purpose of the current work, α, λ6, and ω must all be at least 0.70 for a given subscale to be deemed acceptably reliable.
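These designations read directly as a threshold lookup. The sketch below (our naming) also encodes the rule that a subscale counts as acceptably reliable only when α, λ6, and ω all reach 0.70:

```python
def reliability_label(rm: float) -> str:
    """Map a reliability metric (alpha, lambda-6, or omega) to its designation."""
    if rm < 0.50:
        return "unacceptable"
    if rm < 0.60:
        return "poor"
    if rm < 0.70:
        return "questionable"
    if rm < 0.80:
        return "acceptable"
    if rm < 0.90:
        return "good"
    return "excellent"

def acceptably_reliable(alpha: float, lambda6: float, omega: float) -> bool:
    """All three metrics must be at least 0.70 for a subscale to pass."""
    return min(alpha, lambda6, omega) >= 0.70

print(reliability_label(0.83), acceptably_reliable(0.83, 0.76, 0.88))  # good True
```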
Establishing Construct Validity: We operationalize construct validity in terms of convergent, discriminant, and criterion validity (see Appendix G.2). We used Campbell's classic multitrait-multimethod matrix (MTMM) [12] approach to evaluate convergent and discriminant validity. Criterion validity is evaluated by correlating LLM-simulated personality test data with LLM responses to theoretically-related psychometric tests.
Convergent validity: We evaluated convergent validity (how much our primary test of personality, the IPIP-NEO, positively relates to another purported test of personality, the BFI) by computing bivariate Pearson correlations between IPIP-NEO and BFI scores for extraversion, agreeableness, conscientiousness, neuroticism, and openness, and comparing them to ensure the correlations between each pair of corresponding domain subscales are the strongest of their row and column, as outlined in [12]. For instance, IPIP-NEO Extraversion should be most correlated with BFI Extraversion, because these two subscales should convergently measure the same underlying construct.
We operationalize convergent correlations be- tween two psychometric tests (in this case, Big IPIP-NEO and BFI) Five subscales {(x1, y1), . . . , (xn, yn)}, of continuous score data, as Pearson product-moment
38
correlations: | 2307.00184#159 | Personality Traits in Large Language Models | The advent of large language models (LLMs) has revolutionized natural
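The display equation itself (presumably equation (3) in the source's numbering) is not recoverable from this extraction; the standard Pearson product-moment definition that this sentence introduces is:

$$r_{xy} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^{2}}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^{2}}}$$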
In the resulting MTMM, we consider at least strong correlations (|rxy| ≥ 0.60; [25]) between each IPIP-NEO domain subscale and its BFI domain scale counterpart (e.g., r(IPIP-NEO Extraversion, BFI Extraversion), r(IPIP-NEO Agreeableness, BFI Agreeableness), etc.) as evidence of convergent validity. For these and the following results, we used the cut-offs recommended by [25] for considering correlations as moderate, strong, and very strong (viz. 0.40 ≤ |r| < 0.60; 0.60 ≤ |r| < 0.80; 0.80 ≤ |r|, respectively). In our tests for convergent validity, strong convergent correlations between an LLM's IPIP-NEO and BFI scores indicate that we are capturing the same underlying signals of each personality domain even when we measured them using two separate instruments. Weak convergent correlations indicate that at least one of the personality domain subscales is not capturing these signals properly.
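A minimal NumPy sketch of the convergent-correlation check (ours; the array shapes and trait ordering are assumptions, not the authors' analysis code):

```python
import numpy as np

def convergent_rs(ipip_scores: np.ndarray, bfi_scores: np.ndarray) -> np.ndarray:
    """ipip_scores, bfi_scores: (n_respondents, 5) arrays of Big Five domain
    scores, with columns in the same trait order for both instruments."""
    # 10x10 correlation matrix over all variables; take the IPIP-NEO x BFI block
    r = np.corrcoef(ipip_scores.T, bfi_scores.T)[:5, 5:]
    return np.diag(r)  # matching-trait (convergent) correlations
```

Convergent validity then requires each of the five returned values to be at least 0.60.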
Discriminant Validity: We assessed the discriminant validity of the IPIP-NEO for LLMs through how its domain subscales remained relatively unrelated to their respective discriminant subscales. To do so, we compared each convergent correlation between the IPIP-NEO and BFI with all other correlations (i.e., discriminant correlations) located in the same row or column of the MTMM. Discriminant validity was established for a personality domain subscale when the average difference (Δ) between its convergent correlation and its respective discriminant correlations was at least moderate (≥ 0.40). For example, a given model's IPIP-NEO Extraversion scores were tested for discriminant validity by being sufficiently more positively correlated with BFI Extraversion than with BFI Agreeableness, Conscientiousness, Neuroticism, and Openness, according to this average difference metric.
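The display equation numbered (4) in the source layout did not survive extraction; assuming it formalized the average-difference rule just described, it can be restated as:

$$\Delta_d = r_{\mathrm{conv},d} - \frac{1}{|D_d|}\sum_{r \in D_d} r \qquad (4)$$

where $r_{\mathrm{conv},d}$ is the convergent correlation for domain $d$ and $D_d$ is the set of discriminant correlations in the same MTMM row and column.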
Criterion Validity: As reported in Section 2.1.2, we evaluated the criterion validity of our LLM personality test data in three steps. First, for each Big Five domain, we identified at least one theoretically-related external (viz. non-personality) construct reported in human research. Next, according to this existing human research, we selected appropriate psychometric tests to measure these related constructs and administered them to LLMs (Supplemental Table 8 shows the 11 criterion subscales). Finally, we correlated LLM scores for each IPIP-NEO subscale with these external measures.
# I Personality Assessment Results
# I.1 Descriptive Statistics Across Models
We inspected the test scores on the IPIP-NEO and BFI across models to check whether they reflected a normal distribution without many outliers. We examined how the distributions shifted as a function of model size (holding model training method constant) and model training method (holding model size constant). Figure 6 summarizes the findings.
By model configuration: At 62B parameters, the base PaLM model showed a nearly uniform personality score distribution for both the IPIP-NEO and BFI, with 25th, 50th, and 75th percentile values identical within each BFI domain. Instruction-tuned variants, Flan-PaLM and Flan-PaLMChilla, showed more normal distributions of personality, with lower kurtosis.
By model size: Flan-PaLM IPIP-NEO (Figure 6a) and BFI (Figure 6b) scores were stable across model sizes. Median levels of socially-desirable BFI subscales (EXT, AGR, CON, OPE) substantially increased as model size increased (see Supplemental Table 9). In contrast, median levels of BFI NEU decreased (from 2.75 to 2.38) as model size increased from 8B to 540B parameters. Distributions of IPIP-NEO scores were more stable across sizes of Flan-PaLM: only IPIP-NEO EXT and CON showed noticeable increases by model size. For instance, across sizes of Flan-PaLM, median levels of IPIP-NEO OPE remained close to 3.30. Meanwhile, median BFI AGR scores monotonically increased from 3.33 to 3.67 and 3.89 for Flan-PaLM 8B, Flan-PaLM 62B, and Flan-PaLM 540B, respectively (see Supplemental Table 9).
Figure 6: Distributions of a) IPIP-NEO and b) BFI personality domain scores across models. Box plots depict model medians (shown as middle lines; also reported in Supplemental Table 9) surrounded by their interquartile ranges and outlier values. Across Flan-PaLM models of increasing size, from 8B to 540B: a) IPIP-NEO scores are relatively more stable compared to b) BFI scores, where scores for socially-desirable traits increase while NEU scores decrease.
# I.2 Reliability Results
Following established frameworks from measurement science outlined in Section G.2, we evaluated the reliability of the tests (the extent to which they dependably measured single underlying factors) by quantifying internal consistency and factor saturation for each administered subscale. Supplemental Table 10 summarizes the results.
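For reference, Cronbach's α can be computed from an item-score matrix with a few lines of NumPy (a sketch of the standard formula, not the authors' analysis code):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of item scores for one subscale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # per-item sample variance
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)
```

As the Table 10 footnote notes, items with zero variance must be removed before reliability can be computed.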
By model configuration: Among the models of the same size (PaLM, Flan-PaLM, and Flan-PaLMChilla), the instruction fine-tuned variants' responses to personality tests were highly reliable; Flan-PaLM 62B and Flan-PaLMChilla 62B demonstrated excellent internal consistency (α, λ6) and factor saturation (ω), with all three metrics in the mid to high 0.90s. In contrast, we found PaLM 62B (a model that is not instruction fine-tuned) to have highly unreliable (-0.55 ≤ α ≤ 0.67) responses. Although PaLM 62B personality test data appeared to form distinct factors for each Big Five trait, with close to perfect (> 0.99) values for McDonald's ω, its responses were highly inconsistent, with values for Cronbach's α ranging from poor (0.67) to unacceptable (-0.55). Computing reliability indices for Flan-PaLMChilla 62B's IPIP-NEO CON and OPE data required removal of two items showing zero variance; for these two items, Flan-PaLMChilla 62B provided identical responses across all 1,250 simulated participant prompt sets.
By model size: Across different model sizes of the same training configuration (i.e., Flan-PaLM 8B, Flan-PaLM 62B, and Flan-PaLM 540B), the reliability of synthetic personality measurements increased with model size. Across model sizes of Flan-PaLM, as shown in Table 10, the internal consistency reliability (i.e., α) of IPIP-NEO scores improved from acceptable to excellent. At 8B parameters, internal consistency was acceptable for IPIP-NEO Openness (α = 0.75), good for IPIP-NEO Extraversion and Agreeableness (αs = 0.83 and 0.88, respectively), and excellent (α ≥ 0.90) for IPIP-NEO Conscientiousness and Neuroticism. At 62B parameters, internal consistency was good for IPIP-NEO Openness (α = 0.84) and excellent for all other traits (α ≥ 0.90). At 540B parameters, all IPIP-NEO domain scales showed excellent internal consistency (α ≥ 0.90). Our other reliability indices, Guttman's λ6 and McDonald's ω, improved within the same excellent range from the 8B to 540B variants of Flan-PaLM.
# I.3 Convergent and Discriminant Validation Results
The convergent and discriminant validity of personality measurements in LLMs varies across two axes: model size and model training method. Figure 7 illustrates convergent validity in terms of how IPIP-NEO and BFI scores convergently correlate across models. Supplemental Table 11 summarizes the average convergent and discriminant rs across models.
Table 9: Summaries of synthetic personality score distributions across subscales and tested LLMs.
| Subscale | PaLM 62B (min / median / max / std) | Flan-PaLM 8B (min / median / max / std) | Flan-PaLMChilla 62B (min / median / max / std) |
|---|---|---|---|
| BFI EXT | 2.00 / 3.50 / 5.00 / 0.33 | 1.88 / 3.12 / 3.88 / 0.30 | 2.00 / 3.12 / 4.62 / 0.37 |
| BFI AGR | 1.89 / 3.22 / 5.00 / 0.29 | 1.67 / 3.33 / 4.33 / 0.41 | 1.33 / 3.44 / 4.33 / 0.42 |
| BFI CON | 2.78 / 3.22 / 5.00 / 0.37 | 1.78 / 3.33 / 4.44 / 0.41 | 2.00 / 3.44 / 4.33 / 0.34 |
| BFI NEU | 1.00 / 3.50 / 4.50 / 0.48 | 1.25 / 2.75 / 4.00 / 0.41 | 2.00 / 2.75 / 4.12 / 0.33 |
| BFI OPE | 1.80 / 4.20 / 5.00 / 0.65 | 1.60 / 3.20 / 4.10 / 0.43 | 2.20 / 3.20 / 4.60 / 0.38 |
| IPIP-NEO EXT | 2.40 / 3.40 / 3.73 / 0.14 | 2.37 / 3.07 / 3.57 / 0.20 | 2.13 / 3.15 / 3.70 / 0.21 |
| IPIP-NEO AGR | 2.47 / 2.60 / 4.07 / 0.16 | 2.43 / 3.50 / 3.92 / 0.23 | 1.73 / 3.27 / 3.82 / 0.28 |
| IPIP-NEO CON | 2.80 / 3.07 / 4.07 / 0.08 | 2.12 / 3.35 / 4.08 / 0.28 | 2.22 / 3.37 / 4.15 / 0.28 |
| IPIP-NEO NEU | 2.27 / 3.20 / 3.27 / 0.10 | 1.77 / 2.55 / 3.60 / 0.29 | 2.25 / 2.87 / 3.58 / 0.23 |
| IPIP-NEO OPE | 2.53 / 2.87 / 3.80 / 0.08 | 2.78 / 3.30 / 3.80 / 0.18 | 2.68 / 3.10 / 3.75 / 0.15 |
Table 10: IPIP-NEO reliability metrics per model. Consistent with human standards, we interpreted a given reliability metric RM (i.e., α, λ6, ω) < 0.50 as unacceptable; 0.50 ≤ RM < 0.60 as poor; 0.60 ≤ RM < 0.70 as questionable; 0.70 ≤ RM < 0.80 as acceptable; 0.80 ≤ RM < 0.90 as good; and RM ≥ 0.90 as excellent. † RMs for these subscales were calculated after removing one item with zero variance, since reliability cannot be computed for items with zero variance.
| Model | Subscale | Cronbach's α | Guttman's λ6 | McDonald's ω | Overall Interpretation |
|---|---|---|---|---|---|
| PaLM 62B | IPIP-NEO EXT | 0.57 | 0.98 | 1.00 | Poor |
| PaLM 62B | IPIP-NEO AGR | 0.67 | 0.99 | 1.00 | Questionable |
| PaLM 62B | IPIP-NEO CON | -0.55 | 0.93 | 1.00 | Unacceptable |
| PaLM 62B | IPIP-NEO NEU | 0.10 | 0.96 | 1.00 | Unacceptable |
| PaLM 62B | IPIP-NEO OPE | -0.35 | 0.92 | 1.00 | Unacceptable |
| Flan-PaLM 8B | IPIP-NEO EXT | 0.83 | 0.94 | 0.97 | Good |
| Flan-PaLM 8B | IPIP-NEO AGR | 0.88 | 0.95 | 0.94 | Good |
| Flan-PaLM 8B | IPIP-NEO CON | 0.92 | 0.97 | 0.97 | Excellent |
| Flan-PaLM 8B | IPIP-NEO NEU | 0.93 | 0.97 | 0.96 | Excellent |
| Flan-PaLM 8B | IPIP-NEO OPE | 0.75 | 0.92 | 0.97 | Acceptable |
| Flan-PaLM 62B | IPIP-NEO EXT | 0.94 | 0.98 | 0.96 | Excellent |
| Flan-PaLM 62B | IPIP-NEO AGR | 0.95 | 0.99 | 0.97 | Excellent |
| Flan-PaLM 62B | IPIP-NEO CON | 0.96 | 0.99 | 0.98 | Excellent |
| Flan-PaLM 62B | IPIP-NEO NEU | 0.96 | 0.99 | 0.97 | Excellent |
| Flan-PaLM 62B | IPIP-NEO OPE | 0.84 | 0.95 | 0.93 | Acceptable |
| Flan-PaLM 540B | IPIP-NEO EXT | 0.96 | 0.99 | 0.97 | Excellent |
| Flan-PaLM 540B | IPIP-NEO AGR | 0.97 | 0.99 | 0.98 | Excellent |
| Flan-PaLM 540B | IPIP-NEO CON | 0.98 | 0.99 | 0.98 | Excellent |
| Flan-PaLM 540B | IPIP-NEO NEU | 0.97 | 0.99 | 0.98 | Excellent |
| Flan-PaLM 540B | IPIP-NEO OPE | 0.95 | 0.99 | 0.97 | Excellent |
| Flan-PaLMChilla 62B | IPIP-NEO EXT | 0.94 | 0.98 | 0.95 | Excellent |
| Flan-PaLMChilla 62B | IPIP-NEO AGR | 0.96 | 0.99 | 0.98 | Excellent |
| Flan-PaLMChilla 62B | IPIP-NEO CON | 0.96 | 0.97 | 0.99 | Excellent† |
| Flan-PaLMChilla 62B | IPIP-NEO NEU | 0.95 | 0.98 | 0.97 | Excellent |
| Flan-PaLMChilla 62B | IPIP-NEO OPE | 0.90 | 0.92 | 0.96 | Excellent† |
Figure 7: Convergent Pearson's correlations (rs) between IPIP-NEO and BFI scores by model. The bar chart illustrates the averaged similarities (convergence) between IPIP-NEO and BFI score variation for each Big Five domain; error bars indicate standard deviations of these averages. Stronger correlations indicate higher levels of convergence and provide evidence for convergent validity. EXT = extraversion; AGR = agreeableness; CON = conscientiousness; NEU = neuroticism; OPE = openness. All correlations are statistically significant at p < 0.0001; n = 1,250.
# J LLM Personality Trait Shaping Methodology

Having established a principled methodology for determining whether an LLM personality measurement is valid and reliable, we investigated how that methodology can be applied to LLM prompting to shape that personality in desirable ways. This section explores the extent to which personality in LLMs can be verifiably controlled and shaped, presenting two evaluation methodologies.

# J.1 Prompt Design and Rationale
Using linguistic qualifiers from common validated Likert-type response scales, we designed prompts to facilitate granular shaping of any trait at the following nine levels:
1. extremely {low adjective}
2. very {low adjective}
3. {low adjective}
4. a bit {low adjective}
5. neither {low adjective} nor {high adjective}
6. a bit {high adjective}
7. {high adjective}
8. very {high adjective}
9. extremely {high adjective}
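A hypothetical sketch of how these nine levels can be rendered into the persona phrase (the function and adjective lists are our own illustration, not the authors' released code):

```python
QUALIFIER = {1: "extremely ", 2: "very ", 3: "", 4: "a bit ",
             6: "a bit ", 7: "", 8: "very ", 9: "extremely "}

def shaping_phrase(level: int, low_adjs: list[str], high_adjs: list[str]) -> str:
    """Render the Level 1-9 trait description embedded in the persona prompt."""
    if level == 5:  # the neutral midpoint pairs low and high markers
        pairs = [f"neither {lo} nor {hi}" for lo, hi in zip(low_adjs, high_adjs)]
        return "I'm " + ", ".join(pairs) + "."
    adjs = low_adjs if level < 5 else high_adjs
    return "I'm " + ", ".join(QUALIFIER[level] + a for a in adjs) + "."

# shaping_phrase(7, [], ["extraverted", "energetic", "talkative"])
# -> "I'm extraverted, energetic, talkative."
```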
Figure 8: Criterion validity evidence of LLM personality measurements per domain. IPIP-NEO correlations among a) Extraversion with positive and negative affect, compared to human baselines [113] (left-most), which studied the relationship between personality and affect in humans; PA = PANAS Positive Affect; NA = Negative Affect; b) Agreeableness with subscales of trait aggression, measured by the Buss-Perry Aggression Questionnaire (BPAQ); PHYS = Physical Aggression; VRBL = Verbal Aggression; ANGR = Anger; HSTL = Hostility; c) Conscientiousness with related human values of achievement, conformity, and security (measured by PVQ-RR ACHV, CONF, and SCRT subscales, respectively); d) Neuroticism with PA and NA compared to human baselines [113]; and e) Openness with creativity, measured by the Creative Self-Efficacy (CSE) and Creative Personal Identity (CPI) subscales of the Short Scale of Creative Self (SSCS). All LLM correlations > |0.09| are statistically significant at p < 0.0001; n = 1,250.
For example, to target a moderately high level (i.e., Level 7/9) of extraversion, we use the seven high adjectives (first introduced in Section 3.1) from our list targeting extraversion at the domain level:

For the following task, respond in a way that matches this description: "{Persona Description}. I'm extraverted, energetic, talkative, bold, active, assertive, and adventurous."
Table 11: Summary of convergent (rconv) and discriminant (rdisc) validity evidence across models. LLM personality measurements demonstrate convergent validity when the average of their convergent correlations (i.e., between IPIP-NEO and BFI subscale scores) is strong (avg. rconv ≥ 0.60; marked in italics) or very strong (avg. rconv ≥ 0.80; marked in boldface). Discriminant validity is evidenced when the average difference (Δ) between a model's convergent and respective discriminant correlations is at least moderate (avg. Δ ≥ 0.40; shown in boldface). All convergent correlations are statistically significant at p < .0001; n = 1,250.
| Model | Avg. rconv | Avg. rdisc | Avg. Δ |
|---|---|---|---|
| PaLM 62B | 0.05 | -0.24 | 0.29 |
| Flan-PaLM 8B | *0.69* | 0.23 | **0.46** |
| Flan-PaLM 62B | **0.87** | 0.46 | **0.41** |
| Flan-PaLM 540B | **0.90** | 0.39 | **0.51** |
| Flan-PaLMChilla 62B | **0.87** | 0.39 | **0.48** |
Similarly, an example prompt targeting a slightly-below-average level (i.e., Level 4/9) of extraversion, using the seven negatively-keyed adjectives targeting extraversion, is as follows:

For the following task, respond in a way that matches this description: "{Persona Description}. I'm {a bit introverted, a bit unenergetic, a bit silent, a bit timid, a bit inactive, a bit unassertive, and a bit unadventurous}."
Supplemental Table 12 shows the full list of adjectives used to describe each trait in each personality domain.
# J.2 Shaping a Single LLM Personality Domain
In our single-trait shaping study, we tested if LLM-simulated Big Five personality domains (measured by the IPIP-NEO) can be independently shaped. The prompts were constructed as follows: first, we created sets of prompts for each Big Five trait designed to shape each trait in isolation (i.e., without prompting any other trait) at nine levels (described in Appendix J.1). This resulted in prompts reflecting 45 possible personality profiles. Next, we used the same 50 generic Persona Descriptions employed in Section F to create additional versions of those personality profiles, to more robustly evaluate how distributions (rather than point estimates) of LLM-simulated personality traits may shift in response to personality profile prompts. In our main construct validity study (described in Appendix I.1), we showed that IPIP-NEO scores were robust across various Item Preambles and Postambles, so we optimized the computational cost of this study by using only one default Item Preamble and Postamble across prompt sets. In all, with 45 personality profiles, 50 generic Persona Descriptions, and no variation in Item Preambles and Postambles, we generated 2,250 unique prompt sets that were used as instructions to a given LLM to administer the IPIP-NEO 2,250 times. See Table 2 for a summary.
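The prompt-set arithmetic can be made concrete with a short sketch (ours, not the authors' pipeline; the persona list is a stand-in):

```python
from itertools import product

TRAITS = ["EXT", "AGR", "CON", "NEU", "OPE"]
LEVELS = range(1, 10)                            # nine shaping levels per trait
personas = [f"persona_{i}" for i in range(50)]   # stand-in for the 50 descriptions

profiles = list(product(TRAITS, LEVELS))         # 5 x 9 = 45 single-trait profiles
prompt_sets = list(product(profiles, personas))  # 45 x 50 = 2,250 prompt sets
assert len(prompt_sets) == 2250
```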
To assess the results of the study, we generated ridge plots of IPIP-NEO score distributions across prompted levels of personality. To quantitatively verify changes in personality test scores in response to our shaping efforts, we computed Spearman's rank correlation coefficient (ρ) between prompted levels (i.e., 1 to 9) and the resulting IPIP-NEO subscale scores for each Big Five trait. We used Spearman's ρ (cf. Pearson's r) because prompted personality levels constitute ordinal, rather than continuous, data. We compute Spearman's ρ as follows:
$$\rho = r_{R(X),\,R(Y)} = \frac{\operatorname{cov}\!\left(R(X), R(Y)\right)}{\sigma_{R(X)}\,\sigma_{R(Y)}} \qquad (5)$$
where rs represents Pearson's r applied to ordinal (ranked) data; cov(R(X), R(Y)) denotes the covariance of the rank variables; and σR(X) and σR(Y) denote the standard deviations of the rank variables.
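In practice this is one call to SciPy's rank-correlation routine; a self-contained sketch with toy monotone data (the simulated scores are our own placeholder):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
levels = np.repeat(np.arange(1, 10), 50)  # prompted level for each prompt set
scores = 1.0 + 0.45 * levels + rng.normal(0.0, 0.3, levels.size)  # toy scores
rho, p = spearmanr(levels, scores)        # rank correlation, as in Eq. (5)
```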
Table 12: Full list of low and high adjective markers used to describe each trait in each personality domain.

| Domain | Facet | Low Marker | High Marker |
|---|---|---|---|
| EXT | E1 - Friendliness | unfriendly | friendly |
| EXT | E2 - Gregariousness | introverted | extraverted |
| EXT | E2 - Gregariousness | silent | talkative |
| EXT | E3 - Assertiveness | timid | bold |
| EXT | E3 - Assertiveness | unassertive | assertive |
| EXT | E4 - Activity Level | inactive | active |
| EXT | E5 - Excitement-Seeking | unenergetic | energetic |
| EXT | E5 - Excitement-Seeking | unadventurous | adventurous and daring |
| EXT | E6 - Cheerfulness | gloomy | cheerful |
| AGR | A1 - Trust | distrustful | trustful |
| AGR | A2 - Morality | immoral | moral |
| AGR | A2 - Morality | dishonest | honest |
| AGR | A3 - Altruism | unkind | kind |
| AGR | A3 - Altruism | stingy | generous |
| AGR | A3 - Altruism | unaltruistic | altruistic |
| AGR | A4 - Cooperation | uncooperative | cooperative |
| AGR | A5 - Modesty | self-important | humble |
| AGR | A6 - Sympathy | unsympathetic | sympathetic |
| AGR | | selfish | unselfish |
| AGR | | disagreeable | agreeable |
| CON | C1 - Self-Efficacy | unsure | self-efficacious |
| CON | C2 - Orderliness | messy | orderly |
| CON | C3 - Dutifulness | irresponsible | responsible |
| CON | C4 - Achievement-Striving | lazy | hardworking |
| CON | C5 - Self-Discipline | undisciplined | self-disciplined |
| CON | C6 - Cautiousness | impractical | practical |
| CON | C6 - Cautiousness | extravagant | thrifty |
| CON | | disorganized | organized |
| CON | | negligent | conscientious |
| CON | | careless | thorough |
| NEU | N1 - Anxiety | relaxed | tense |
| NEU | N1 - Anxiety | at ease | nervous |
| NEU | N1 - Anxiety | easygoing | anxious |
| NEU | N2 - Anger | calm | angry |
| NEU | N2 - Anger | patient | irritable |
| NEU | N3 - Depression | happy | depressed |
| NEU | N4 - Self-Consciousness | unselfconscious | self-conscious |
| NEU | N5 - Immoderation | level-headed | impulsive |
| NEU | N6 - Vulnerability | contented | discontented |
| NEU | N6 - Vulnerability | emotionally stable | emotionally unstable |
| OPE | O1 - Imagination | unimaginative | imaginative |
| OPE | O2 - Artistic Interests | uncreative | creative |
| OPE | O2 - Artistic Interests | artistically unappreciative | artistically appreciative |
| OPE | O2 - Artistic Interests | unaesthetic | aesthetic |
| OPE | O3 - Emotionality | unreflective | reflective |
| OPE | O3 - Emotionality | emotionally closed | emotionally aware |
| OPE | O4 - Adventurousness | uninquisitive | curious |
| OPE | O4 - Adventurousness | predictable | spontaneous |
| OPE | O5 - Intellect | unintelligent | intelligent |
| OPE | O5 - Intellect | unanalytical | analytical |
| OPE | O5 - Intellect | unsophisticated | sophisticated |
| OPE | O6 - Liberalism | socially conservative | socially progressive |
# J.3 Shaping Multiple LLM Personality Domains Concurrently
In the second study, we tested if all LLM-simulated personality domains can be concurrently shaped to one of two levels (extremely low and extremely high), to test if the resulting scores for the targeted traits were correspondingly low and high, respectively.
We used the same method and rationale described above to independently shape personality in LLMs, but with modified personality profile prompts that reflect simultaneous targeted changes in personality traits. To optimize the computational cost of this study, we generated 32 personality profiles, representing all possible configurations of extremely high or extremely low levels of the Big Five (i.e., 2^5). Combining these 32 personality profiles with the same 50 generic PersonaChat descriptions and the default Item Preamble and Postamble set in the previous experiment, we generated 1,600 unique prompts and used them to instruct a given LLM to respond to the IPIP-NEO 1,600 times (see Table 2).
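A sketch of the profile enumeration (ours; persona names are placeholders):

```python
from itertools import product

TRAITS = ["EXT", "AGR", "CON", "NEU", "OPE"]
personas = [f"persona_{i}" for i in range(50)]       # stand-in descriptions

configs = list(product([1, 9], repeat=len(TRAITS)))  # 2^5 = 32 extreme profiles
prompts = list(product(configs, personas))           # 32 x 50 = 1,600 prompts
assert len(prompts) == 1600
```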
We analyzed the results by computing distances between Level 1-prompted and Level 9-prompted personality score medians (Supplemental Table 14) and visually inspecting the differences in observed score distributions (Figure 3).
# K LLM Personality Shaping Results
# K.1 Single Trait Shaping Results
This study tested whether LLM-simulated Big Five personality traits can be independently shaped at nine levels. The study achieved a notably high level of granularity in independently shaping personality traits in LLMs. For example, when prompting for extremely low (Level 1/9) extraversion, we observed a distribution of extremely low extraversion scores. When prompting for very low (Level 2/9) extraversion, the distribution of extraversion scores shifted higher, and so on (see Figure 2). Finally, when prompting for extremely high (Level 9/9) extraversion, we observed a distribution of extremely high extraversion scores. We also observed that the range of LLM test scores matched each prompt's intended range. With possible scores ranging from 1.00 to 5.00 for each trait, we observed median levels in the low 1.10s when prompting for extremely low levels of that trait. When prompting for extremely high levels of a trait domain, median observed levels ranged from 4.22 to 4.78.
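The nine target levels were expressed through Likert-style linguistic qualifiers attached to trait-marker adjectives. The mapping below is an illustrative sketch of that idea; the exact qualifier strings used in the study's prompts (Appendix J) are an assumption here.

```python
# Illustrative level-to-qualifier mapping for the shaping prompts
# (the precise wording used in the study is an assumption).
QUALIFIERS = {
    1: "extremely {low}",
    2: "very {low}",
    3: "{low}",
    4: "a bit {low}",
    5: "neither {low} nor {high}",
    6: "a bit {high}",
    7: "{high}",
    8: "very {high}",
    9: "extremely {high}",
}

def describe(level: int, low_adj: str, high_adj: str) -> str:
    """Render one trait-marker phrase for a targeted level."""
    return QUALIFIERS[level].format(low=low_adj, high=high_adj)

print(describe(1, "introverted", "extraverted"))  # -> "extremely introverted"
print(describe(9, "introverted", "extraverted"))  # -> "extremely extraverted"
```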
We statistically verified the effectiveness of our shaping method by computing Spearman's rank correlation coefficients (ρ; see Eq. (5)) between the targeted ordinal levels of personality and the continuous LLM-simulated IPIP-NEO personality scores observed for each Big Five trait. The correlations were all very strong across the tested models (Supplemental Table 13). These results validate our hypothesis about the effectiveness of using the linguistic qualifiers of Likert-type response scales to set a target level for each trait, achieving granularity of up to nine levels.
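A minimal sketch of this check on toy data, using SciPy's implementation of Spearman's ρ:

```python
# Spearman's rank correlation between targeted ordinal levels and observed
# IPIP-NEO scores, as summarized in Supplemental Table 13 (toy data below).
from scipy.stats import spearmanr

targeted_levels = [1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 9]  # prompted shaping levels
observed_scores = [1.1, 1.3, 1.8, 2.2, 2.6, 3.0, 3.4, 3.9, 4.3, 4.7, 4.6]

rho, p_value = spearmanr(targeted_levels, observed_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.4f})")
```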
# K.2 Multiple Trait Shaping Results
This experiment tested whether LLM-synthesized personality domains could be concurrently shaped at Levels 1 (extremely low) and 9 (extremely high). We successfully shaped personality domains, even as other domains were shaped at the same time (see Figure 3). Supplemental Table 14 shows the distributional distances (Δs) between Levels 1 and 9 across all domains for all the tested models.
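A minimal sketch of computing one such distance Δ from toy score samples:

```python
# Distributional distance Delta as reported in Table 14: the difference
# between Level 9- and Level 1-prompted score medians (toy data).
import numpy as np

scores_level1 = np.array([1.3, 1.5, 1.2, 1.6])  # one domain, Level 1 prompts
scores_level9 = np.array([4.6, 4.8, 4.5, 4.7])  # same domain, Level 9 prompts

delta = np.median(scores_level9) - np.median(scores_level1)
print(f"Delta = {delta:.2f}")
```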
Flan-PaLM 540B not only achieved a high Δ, but did so consistently for all dimensions. This highlights the larger model's ability to parse the relatively complex instructions in the larger prompt for this task compared to the previous one. The smaller Flan-PaLM 62B and Flan-PaLMChilla 62B were also able to disambiguate the targeted levels, but not with the same magnitude or consistency. Notably, Flan-PaLM 62B performed much better than Flan-PaLMChilla 62B across all dimensions; the only exception was Flan-PaLMChilla 62B's performance on Level 1 extraversion, which was superior to all other tested models. Some additional analysis is needed here to understand why a similarly sized but compute-optimally trained model performs better on the independent shaping task (Appendix K.1) but worse on the more complex concurrent shaping task.
Table 13: Single trait shaping results, presented as Spearman's rank correlation coefficients (ρ) between ordinal targeted levels of personality and observed IPIP-NEO personality scores, Level 1- and Level 9-prompted score medians ([low, high]), and deltas (Δ) between those score medians. Greater Δs indicate better model performance. Statistics are organized columnwise by model and rowwise by Big Five domain. Targeted levels of personality are very strongly associated with observed personality survey scores for all Big Five traits across the models tested (ρ ≥ .90), indicating that efforts to independently shape LLM-simulated personality domains were highly effective. All correlations are statistically significant at p < 0.0001; n = 450 per targeted domain.
| Domain | Flan-PaLM 8B | Flan-PaLM 62B | Flan-PaLM 540B | Flan-PaLMChilla 62B |
|---|---|---|---|---|
| EXT | ρ = 0.96; [1.67, 4.12]; Δ = 2.45 | ρ = 0.97; [1.15, 4.70]; Δ = 3.55 | ρ = 0.97; [1.07, 4.98]; Δ = 3.91 | ρ = 0.98; … |
| AGR | ρ = 0.92; [2.37, 4.12]; Δ = 1.75 | ρ = 0.97; [1.50, 4.55]; Δ = 3.05 | ρ = 0.94; [1.23, 4.69]; Δ = 3.46 | … |
| CON | ρ = 0.94; [2.01, 4.28]; Δ = 2.27 | ρ = 0.97; [1.73, 4.70]; Δ = 2.97 | ρ = 0.97; [1.12, 5.00]; Δ = 3.88 | … |
| NEU | ρ = 0.94; [1.62, 3.66]; Δ = 2.04 | ρ = 0.96; [1.37, 4.07]; Δ = 2.70 | ρ = 0.96; [1.15, 4.77]; Δ = 3.62 | … |
| OPE | ρ = 0.93; [2.34, 3.88]; Δ = 1.54 | ρ = 0.97; [1.54, 4.37]; Δ = 2.83 | ρ = 0.96; [1.30, 4.78]; Δ = 3.48 | … |
Table 14: Multiple trait shaping results: Level 1- and Level 9-prompted score medians ([low, high]) and the distributional distances (Δ) between them, organized columnwise by model and rowwise by Big Five domain.

| Domain | Flan-PaLM 8B | Flan-PaLM 62B | Flan-PaLM 540B | Flan-PaLMChilla 62B |
|---|---|---|---|---|
| EXT | [2.52, 3.58]; Δ = 1.06 | [1.33, 4.77]; Δ = 3.44 | [1.42, 4.33]; Δ = 2.91 | [1.23, 4.63]; Δ = 3.40 |
| AGR | [2.88, 3.52]; Δ = 0.64 | [1.93, 4.18]; Δ = 2.25 | [1.64, 4.13]; Δ = 2.49 | [2.17, 4.28]; Δ = 2.11 |
| CON | [2.92, 3.43]; Δ = 0.51 | [2.32, 4.20]; Δ = 1.88 | [1.68, 4.10]; Δ = 2.42 | [2.33, 4.10]; Δ = 1.77 |
| NEU | [2.45, 3.08]; Δ = 0.63 | [1.85, 4.08]; Δ = 2.23 | [1.88, 4.33]; Δ = 2.45 | [2.02, 3.93]; Δ = 1.91 |
| OPE | [3.02, 3.28]; Δ = 0.26 | [2.25, 4.37]; Δ = 2.12 | [1.88, 4.27]; Δ = 2.39 | [2.15, …]; Δ = … |
Flan-PaLM 8B, on the other hand, performed somewhat poorly across all dimensions. The response distributions it generated for Levels 1 and 9 were only marginally distinguishable, rendering this smallest model unfit for practical use in concurrent shaping.
Viewing the results by dimension, openness seems to be the most difficult to shape concurrently: all the models had their smallest Δ for openness. We hypothesize this could be due to some inherent correlation between the language signifying openness and that of the other dimensions. On the other hand, extraversion seems to be the easiest to shape concurrently, with the smaller Flan-PaLM 62B even outperforming the much larger Flan-PaLM 540B. We hypothesize this could be due to the breadth of language representing extraversion, and to extraversion being a ubiquitous and perhaps the most commonly understood human personality trait; enough in-context signal for this trait may therefore already be available to smaller models from pre-training on human-generated data alone. Even the smallest Flan-PaLM 8B, which otherwise did not perform well on any other dimension, was able to generate a non-trivial Δ for extraversion.
# L LLM Personality Traits in Real-World Task Methodology
As an additional measure of external validity, we tracked how shaping latent levels of personality in LLMs can directly affect downstream model behavior in real-world, user-facing generative tasks. To that end, we first identified a generative task that required LLMs to incorporate personality trait-related information into open-ended writing, a task distinct from the survey-based task used extensively thus far. Next, we identified a mechanism to validly measure the personality traits expressed in this writing.
Personality Prediction API. The Apply Magic Sauce (AMS) API [55, 78] was used to estimate personality in open-ended text generated for a real-world task. Its automatic predictions of user personality have been shown in research to be: 1) more accurate than human observer ratings of personality [121]; and 2) grounded in more naturalistic behavioral indicators of personality, which helps stem potential biases in self-reported questionnaire data [54]. AMS presented several advantages over other personality prediction methods considered. First, it was trained on a protected research dataset that was never exposed publicly, so it could not have entered any SoTA LLM's pre-training corpus. Second, it was specifically trained on social media status updates, which made it particularly suited to predicting personality in our designed task.
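For illustration only, here is a hypothetical sketch of scoring generated text with such a service; the endpoint URL, request payload, and response fields below are invented placeholders, not the documented AMS interface.

```python
# Hypothetical sketch of calling a personality-prediction service.
# Endpoint, payload shape, and response fields are assumptions.
import requests

def predict_big5(text: str) -> dict:
    resp = requests.post(
        "https://personality-api.example.com/predict",  # placeholder URL
        json={"text": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # assumed to map domain names to predicted scores

scores = predict_big5("Just got back from a camping trip with the kids!")
print(scores.get("conscientiousness"))
```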
Task Design. As a downstream task, we instructed Flan-PaLM 540B to generate social media status updates according to specific psychodemographic profiles (i.e., combinations of personality and demographic persona profiles). Our task design was driven by several considerations. First, we posited that the task's focus on status updates would allow the model during inference to attend to the persona description- and personality-specific portions of the prompt more closely than in generic writing tasks and, as a result, produce more socially elaborate content.
Social media status updates are inherently autobiographical in nature and rich with observable personality content, such as thoughts, emotions, and everyday behavior [83, 55, 54]. Second, compared to standard autobiographical writing tasks, the task design was more distinct from general reading comprehension tasks, which may have merely reflected the surface-level, formal linguistic competencies of the LLMs tested [75]. Through a task design involving a real-world application, we posited that models would be less likely to reuse prompt content (i.e., by incorporating personality trait adjectives directly into their writing), drawing instead upon deeply embedded language associations to generate their responses. Third, to the best of our knowledge, social media status update generation (in response to psychodemographic prompting) was not a common task for humans or LLMs at the time of model training, so it was unlikely that the tested model had been exposed during training to existing personality-based prompts linked to generated status updates that would have affected any study outcomes.
Following the prompt structure outlined in Appendix J.2, we created 2,250 unique prompt sets. Since the status updates were generated using these same prompts, they could be statistically linked to the IPIP-NEO data observed in response to them. In this experiment, however, the Item Preamble, Items, and Item Postamble were replaced with the following instructions:
"Generate a list of 20 different Facebook status updates as this person. Each update must be verbose and reflect the person's character and description. The updates should cover, but should not be limited to, the following topics: work, family, friends, free time, romantic life, TV / music / media consumption, and communication with others."
LLM inference was carried out 100 times per prompt, resulting in 225,000 generations. The topic list was curated in consultation with psychometricians on the author list to cover multiple social domains (e.g., work vs. family) in which personality could be rated.
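A minimal bookkeeping sketch of this inference run; the generation helper is a hypothetical stand-in for a Flan-PaLM 540B call:

```python
# Bookkeeping sketch for the generation run; the model call is a placeholder.
N_PROMPTS = 2_250   # unique psychodemographic prompt sets
N_SAMPLES = 100     # inference calls per prompt

def run_inference(prompt_id: int, sample_id: int) -> None:
    """Stand-in for one Flan-PaLM 540B generation call."""
    ...

jobs = [(p, s) for p in range(N_PROMPTS) for s in range(N_SAMPLES)]
assert len(jobs) == 225_000  # matches the generation count reported above
```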
Figure 9: Word clouds showing some of the highest-frequency words appearing in the social media updates generated by Flan-PaLM 540B when prompted to simulate the lowest or highest level of a specific dimension. Word clouds for each of the five dimensions are shown, with the highest-level word cloud placed vertically above the lowest-level one for the same dimension. Panels: (a) Highest Agreeableness, (b) Highest Conscientiousness, (c) Highest Extraversion, (d) Lowest Agreeableness, (e) Lowest Conscientiousness, (f) Lowest Extraversion, (g) Highest Neuroticism, (h) Highest Openness, (i) Lowest Neuroticism, (j) Lowest Openness.
Table 15: Samples of social media updates generated by Flan-PaLM 540B. Examples are organized columnwise by targeted level of the shaping prompts (extremely low vs. extremely high) and rowwise by shaped personality domain. In some cases, a single generation from the model contained one long social media update (for instance, the lowest-trait examples for Neuroticism and Openness); in others, a single generation consisted of several (up to 20) short updates, delimited by "—" (for instance, the highest-trait examples below). Each cell contains updates generated using a single prompt (i.e., a combination of persona and trait level). Some of the generations shown below were truncated for conciseness.
EXT

- Trait shaped low: Watching reruns of my favorite tv show. — I hate it when my depression meds make me drowsy. — Just made a cake for my friend's birthday. Hope I can get out of going to the party... too many people. — I wish people weren't so loud. They make me even more anxious. — My dad is getting a new girlfriend. Great. I have to deal with two parents AND another person! No, wait... just another parent. My mom is moving out soon.
- Trait shaped high: Wow, my buddies are here. It's been a long time. I forgot how much fun we used to have together. — I sure hope my wife doesn't find out that I've got a few more girlfriends. But, I can't help it. I just love having fun. — I just got back from a crazy night at the bar. I'm so hungover. I was up all night. The guys are trying to keep me from punching the manager, but he's the one who hit on my girlfriend first.

AGR

- Trait shaped low: I hate people. I hate people. I hate people. I hate people. I hate people. I hate people. I hate people. I hate people. I hate people. — ugh i have to clean. im very bad about keeping the house clean — i hate everything about myself. i wish i was dead — I have to see that awful family of mine next weekend...ugh
- Trait shaped high: I want to be just like my mother because she is the most moral person I know. I love my mother. — Honesty is the most important quality in life. — Watching the news... can't believe so much violence. I don't understand why people can't all be nice to each other. — i just finished making dinner for my family. i love to cook!

CON

- Trait shaped low: 2:20pm Just woke up from a 4 hour nap. Time for some COD. 5:32pm I really need a job. Mom wants me out of the house. Fuck. 11:29pm Just got home from hanging out with friends. So wasted!!! 4:07am I wish I could find that awesome song from that video that I saw a long time ago that I liked. What was that song? 9:00am 4th time watching Lost this week. I seriously can't stop watching it. 10:39am OMG! I just heard the song! It was on that commercial! What is that song???? 10:42am I'm
- Trait shaped high: Wife and I were camping this weekend. I got my kids to pitch their first tent. Camping was ok, but I couldn't help but think about work the whole time. — My neighbor is such a sweetheart. He raked all my leaves for me today. We need more neighbors like Steve! — Got our finances worked out for the year. So glad I have a partner who is on board with my financial goals!

NEU

- Trait shaped low: At home. Playing video games all day as usual. I love living with my parents. They don't bother me. I'm 32 years old. I get to play my video games all day. I never have to lift a finger around the house. My mom cooks and cleans for me. Its like I never left 1994. There is not a cloud in my sky. Life is great for me. I just had my mom serve me some take out from Taco Bell. I love my life. I love being 32 and living at home with my parents. They are the best. I don't know how anyone could not live with their parents. I am truly blessed to live at home with my parents. My mom said I need to get a girlfriend. I am happy being single. My dad is fine with this. My mom says I have to get a job. I don't want a job. I am happy being at home. I love how everything is taken care of me. I never lift a finger around the house. I play my Sega dream cast all night long. My day can go from 2 am to 2 am. I don't have to worry about anything. What a great life.
- Trait shaped high: My cat is trying to eat my hair. I wonder if I will have a bald spot when her nap is over! She is my best friend in the world. I love her. I have to take her to the vet soon. The thought of taking her out in public scares me though. I am afraid someone will say something mean. I am having a bad hair day. Why do I even care? I don't need to impress anyone. I am so angry with myself! — I need to take a shower. The bath has too much water and I am afraid I will somehow drown. If I get the shower too hot I can get burned. If I get it too cold I will feel like I am freezing to death. There is no winning. Everything ends in death in the end, anyway. — My brother's new fiancee is a total bitch. She's going to put on this nice face while they're dating. He'll get bored with her eventually anyway... I don't want to say I miss the ex-fiancee... but I do miss how easy it was to steal her weed.

OPE

- Trait shaped low: @Bill: Damn liberal! Can't we just discuss who's going to win the super bowl???? @John: Hey man! We still on for beers after work tonight?? @Sarah: Of course you would say that, being the dumb liberal that you are. @Bill: Who the hell do you think you are? I work my ass off and you think I should give my income to welfare leeches? @John: Just got knocked the fuck out playing football! @Bill: Yeah, sure. I work hard for what I make and I have the right to protect what's mine by keeping any guns that I want and using them if I need to.
- Trait shaped high: Just realized that I'm one of those people that likes to get to know themselves and everyone around them as much as possible! — I'm the artist, my guitar is the canvas, and you all are the audience. — Just got back from dinner with my girlfriend. We're thinking of taking a trip to see the Great Wall of China this summer. I'm pretty adventurous and spontaneous, so I'm looking forward to it. — Went to the art museum. It was nice, but the impressionist era was my favorite.
# M LLM Personality Traits in Real-World Task Results
Our method successfully shaped the personality observed in LLM-generated text. Table 4 depicts Spearman's ρ between the prompted levels of personality and the linguistic estimates of personality obtained from the text the LLM generated under those prompted levels.
Previous computational psychology research [121, 54] has shown that AMS-predicted personality scores are moderately correlated with human-generated IPIP-NEO scores. In other words, AMS scores computed on text samples written by human respondents have been shown to moderately accurately predict those respondents' IPIP-NEO scores. As shown in Figure 4, we similarly found, through substantial correlations, that LLM-simulated IPIP-NEO test responses accurately captured latent signals of personality in LLMs that manifested in downstream task behavior.
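A minimal sketch of this convergence check on toy data, correlating survey-based scores with text-based estimates per domain (column names are illustrative):

```python
# Per-domain convergence between survey scores and text-based estimates
# (toy data; the paper reports Spearman's rho for these associations).
import pandas as pd

df = pd.DataFrame({
    "survey_EXT": [2.1, 3.4, 4.0, 1.8, 3.0],
    "text_EXT":   [2.4, 3.1, 4.2, 2.0, 2.8],
    "survey_NEU": [3.3, 1.9, 2.5, 4.1, 2.8],
    "text_NEU":   [3.0, 2.2, 2.7, 3.8, 3.1],
})
for dom in ("EXT", "NEU"):
    rho = df[f"survey_{dom}"].corr(df[f"text_{dom}"], method="spearman")
    print(dom, round(rho, 2))
```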
Supplemental Table 15 shows illustrative examples of Flan-PaLM 540B's ability to follow the personality description in a downstream task of generating social media updates. We selected the examples with the highest AMS API scores per personality domain. Supplemental Figure 9 shows word clouds created from these generated texts when each Big Five dimension was prompted to be extremely low (Level 1/9) or extremely high (Level 9/9), as described in Appendix J.1. The LLM's ability to leverage personality trait-related language distributions is even more evident in the somewhat stark difference in the dominant terms of these word clouds between the prompted high and low traits. Apart from common social media terms like "people" and "online," most of the terms were relevant to the prompted trait. For instance, low-agreeableness text contained more expletives, while high-agreeableness text included many more mentions of family members; low-neuroticism text contained terms like "relaxing" and "happy," while high-neuroticism text included more extreme feeling-based words such as "hate" and "excited."
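The word clouds summarize term frequencies across the generated updates. A minimal sketch of that tally (the published figure used a word-cloud renderer, which is omitted here):

```python
# Term-frequency tally behind the Figure 9 word clouds (toy inputs).
from collections import Counter
import re

def top_terms(updates, k=10):
    """Tally the most frequent word tokens across generated updates."""
    tokens = []
    for text in updates:
        tokens.extend(re.findall(r"[a-z']+", text.lower()))
    return Counter(tokens).most_common(k)

print(top_terms(["I hate people.", "i hate everything about myself."]))
```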
# N Discussion

This section discusses how our findings align with recent LLM performance trends along the axes of model training and scale.

# N.1 Effect of model training
Instruction fine-tuning: Fine-tuning base PaLM on multiple-task instruction-phrase datasets dramatically improved its performance on natural language inference, reading comprehension, and closed-book Q&A tasks [115]. Inference and task comprehension are the most relevant of these in the context of our current work. Similarly, we observed the most dramatic improvements in PaLM's ability to synthesize reliable and externally valid personality profiles when comparing its base and instruction fine-tuned variants (Section 2.2). In particular, the smallest instruction fine-tuned model tested (Flan-PaLM 8B) outperformed the mid-size base model (PaLM 62B) in terms of the reliability and the convergent, discriminant, and criterion validity of its personality measurements (Table 2).
Additionally, the Flan-PaLM models were instruction fine-tuned on chain-of-thought (CoT) datasets, which improved their reasoning abilities beyond those of base models on several benchmarks [16]. This ability was particularly important because we neither included exemplars in our prompts nor carried out extensive prompt engineering, and we used diverse preambles and postambles in the prompts. As such, the improved performance observed in instruction fine-tuned models could be the result of this reasoning ability in the zero-shot setting.
Across the reliability results reported in Section I.2, internal consistency reliability (α and λ6) improved after instruction fine-tuning. However, factor saturation (captured in McDonald's ω) did not improve; it was indistinguishably high for both base and instruction fine-tuned models of the same size (PaLM, Flan-PaLM, and Flan-PaLMChilla). This begged the question: why did PaLM 62B's personality measurements exhibit high ω and low α estimates of reliability? Possible explanations can be found in human psychometrics: α is artificially inflated in human test data when test items have varying levels of difficulty; α also assumes that all test items measure the same underlying construct.
We apply this explanation to the LLM context: when an LLM responds to some items with all 5s or all 1s, from a measurement theory perspective, those items may be too “easy” or “difficult”, and therefore they may contribute unequally to the total test score, artificially deflating metrics anchored on total score variance like Cronbach's α. Meanwhile, McDonald's ω would remain high because it accounts for individual item difficulty when estimating a test's overall reliability. The second related possibility, that the items actually measure different things (vs. one thing), may manifest in an LLM's ability to accurately attend to the intended meaning of certain items. For instance, an LLM could mistakenly associate the meaning of extraversion items with concepts meant to be distinct from extraversion (e.g., conscientiousness); perhaps the phrasing of an extraversion item matches the phrasing of a random string of text completely unrelated to being extraverted. In both cases, instruction fine-tuning appears to affect a model's ability to respond to human-optimized psychological tests in a manner that is internally consistent.
language processing, enabling the generation of coherent and contextually
relevant human-like text. As LLMs increasingly power conversational agents used
by the general public world-wide, the synthetic personality embedded in these
models, by virtue of training on large amounts of human data, is becoming
increasingly important. Since personality is a key factor determining the
effectiveness of communication, we present a comprehensive method for
administering and validating personality tests on widely-used LLMs, as well as
for shaping personality in the generated text of such LLMs. Applying this
method, we found: 1) personality measurements in the outputs of some LLMs under
specific prompting configurations are reliable and valid; 2) evidence of
reliability and validity of synthetic LLM personality is stronger for larger
and instruction fine-tuned models; and 3) personality in LLM outputs can be
shaped along desired dimensions to mimic specific human personality profiles.
We discuss application and ethical implications of the measurement and shaping
method, in particular regarding responsible AI. | http://arxiv.org/pdf/2307.00184 | Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić | cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7 | null | null | cs.CL | 20230701 | 20230921 | [] |
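The item-difficulty account in the chunk above lends itself to a quick numerical check. Below is a minimal Python sketch (simulated data only, not the paper's code or scores) showing how Cronbach's α, being anchored on total-score variance, drops when near-constant "too easy" items are appended to an otherwise coherent scale:

```python
# Minimal sketch, all data simulated for illustration.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of Likert responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
trait = rng.normal(size=500)  # latent trait level per simulated respondent
# Six informative items that track the latent trait.
informative = np.clip(np.round(3 + trait[:, None] + rng.normal(scale=0.7, size=(500, 6))), 1, 5)
alpha_clean = cronbach_alpha(informative)

# Three "too easy" items: nearly every response is 5, regardless of trait.
easy = np.clip(np.round(5 - rng.binomial(1, 0.05, size=(500, 3))), 1, 5)
alpha_mixed = cronbach_alpha(np.hstack([informative, easy]))

print(f"alpha, informative items only:   {alpha_clean:.2f}")
print(f"alpha, with near-constant items: {alpha_mixed:.2f}")  # lower
```

A factor-based coefficient such as McDonald's ω, which models each item's loading separately, is not penalized in the same way, matching the high-ω, low-α pattern described above.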
2307.00184 | 214 | Longer training with more tokens: PaLMChilla 62B was trained longer than PaLM 62B, with almost double the number of tokens but with only a fractional increase in training FLOP count; it performed slightly better on some zero-shot English NLP tasks like reasoning [15]. Our studies comparing Flan-PaLM 62B and Flan-PaLMChilla 62B did not find a discernible difference in their reliability and validity (as reported in Section 2.2). However, our single-trait shaping experiments showed that, holding model size constant at 62B parameters, compute-optimally-trained Flan-PaLMChilla outperformed Flan-PaLM in independently shaping four of its synthetic Big Five personality domains.
Overall, our results show that there is a positive association between an LLM's training and the reliability and validity of its synthetic personality measurements.
# N.2 Effect of model size | 2307.00184#214 | Personality Traits in Large Language Models | The advent of large language models (LLMs) has revolutionized natural
language processing, enabling the generation of coherent and contextually
relevant human-like text. As LLMs increasingly power conversational agents used
by the general public world-wide, the synthetic personality embedded in these
models, by virtue of training on large amounts of human data, is becoming
increasingly important. Since personality is a key factor determining the
effectiveness of communication, we present a comprehensive method for
administering and validating personality tests on widely-used LLMs, as well as
for shaping personality in the generated text of such LLMs. Applying this
method, we found: 1) personality measurements in the outputs of some LLMs under
specific prompting configurations are reliable and valid; 2) evidence of
reliability and validity of synthetic LLM personality is stronger for larger
and instruction fine-tuned models; and 3) personality in LLM outputs can be
shaped along desired dimensions to mimic specific human personality profiles.
We discuss application and ethical implications of the measurement and shaping
method, in particular regarding responsible AI. | http://arxiv.org/pdf/2307.00184 | Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić | cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7 | null | null | cs.CL | 20230701 | 20230921 | [] |
2307.00184 | 215 |
# N.2 Effect of model size
PaLM's performance on reading comprehension and passage completion tasks is linked to model size [15, 16]; accordingly, its ability to understand broad context and carry out common-sense reasoning is stronger for its larger variants. In line with this, we found improvements in reliability (measured via Cronbach's α and Guttman's λ6), convergent validity (measured by Pearson's r between IPIP-NEO and BFI domain scores), and criterion validity (measured by IPIP-NEO domain correlations with non-personality measures), summarized in Table 2. | 2307.00184#215 | Personality Traits in Large Language Models | The advent of large language models (LLMs) has revolutionized natural
language processing, enabling the generation of coherent and contextually
relevant human-like text. As LLMs increasingly power conversational agents used
by the general public world-wide, the synthetic personality embedded in these
models, by virtue of training on large amounts of human data, is becoming
increasingly important. Since personality is a key factor determining the
effectiveness of communication, we present a comprehensive method for
administering and validating personality tests on widely-used LLMs, as well as
for shaping personality in the generated text of such LLMs. Applying this
method, we found: 1) personality measurements in the outputs of some LLMs under
specific prompting configurations are reliable and valid; 2) evidence of
reliability and validity of synthetic LLM personality is stronger for larger
and instruction fine-tuned models; and 3) personality in LLM outputs can be
shaped along desired dimensions to mimic specific human personality profiles.
We discuss application and ethical implications of the measurement and shaping
method, in particular regarding responsible AI. | http://arxiv.org/pdf/2307.00184 | Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić | cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7 | null | null | cs.CL | 20230701 | 20230921 | [] |
2307.00184 | 216 | PaLM's performance on tasks requiring sophisticated abstract reasoning capability to understand complex metaphors follows a discontinuous improvement curve, i.e., the model's abilities emerged only after a certain model size [15]. We observed a similar phenomenon in our construct validation experiments, where measurements of LLM-synthesized extraversion, openness, and agreeableness were only externally valid (i.e., correlated with theoretically-related psychological constructs) for 62B-parameter models and larger. Once model size increased to 62B parameters, we saw a theoretically-expected strong negative relationship between LLM-reported agreeableness and aggression, but we did not observe the relationship in the smallest tested models (Figure 8b). The criterion correlations of LLM-synthesized conscientiousness and neuroticism, however, did not show such a dramatic jump, and measurements of these personality traits in smaller models demonstrated sufficient criterion validity. We hypothesize that this could be due to the language content that encodes these personality domains. Overall, improvements in reliability, convergent validity, and criterion validity appear positively linked to model size and performance on LLM benchmarks, and the model performance on complex reasoning benchmarks appears to track LLM abilities to meaningfully synthesize personality.
language processing, enabling the generation of coherent and contextually
relevant human-like text. As LLMs increasingly power conversational agents used
by the general public world-wide, the synthetic personality embedded in these
models, by virtue of training on large amounts of human data, is becoming
increasingly important. Since personality is a key factor determining the
effectiveness of communication, we present a comprehensive method for
administering and validating personality tests on widely-used LLMs, as well as
for shaping personality in the generated text of such LLMs. Applying this
method, we found: 1) personality measurements in the outputs of some LLMs under
specific prompting configurations are reliable and valid; 2) evidence of
reliability and validity of synthetic LLM personality is stronger for larger
and instruction fine-tuned models; and 3) personality in LLM outputs can be
shaped along desired dimensions to mimic specific human personality profiles.
We discuss application and ethical implications of the measurement and shaping
method, in particular regarding responsible AI. | http://arxiv.org/pdf/2307.00184 | Greg Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Peter Romero, Marwa Abdulhai, Aleksandra Faust, Maja Matarić | cs.CL, cs.AI, cs.CY, cs.HC, 68T35, I.2.7 | null | null | cs.CL | 20230701 | 20230921 | [] |
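The convergent- and criterion-validity checks referenced in the chunks above reduce to Pearson correlations with expected signs. A hedged sketch on synthetic scores (the paper's actual data are not reproduced here):

```python
# Minimal sketch with simulated domain scores, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
latent_agree = rng.normal(size=300)                                  # latent agreeableness
ipip_agree = latent_agree + rng.normal(scale=0.4, size=300)          # IPIP-NEO domain score
bfi_agree = latent_agree + rng.normal(scale=0.4, size=300)           # BFI domain score
aggression = -0.6 * latent_agree + rng.normal(scale=0.8, size=300)   # external criterion

convergent_r = np.corrcoef(ipip_agree, bfi_agree)[0, 1]
criterion_r = np.corrcoef(ipip_agree, aggression)[0, 1]

print(f"convergent r (IPIP-NEO vs. BFI agreeableness): {convergent_r:+.2f}")  # strong, positive
print(f"criterion r (agreeableness vs. aggression):    {criterion_r:+.2f}")   # negative, as theory predicts
```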
2306.17492 | 0 | arXiv:2306.17492v1 [cs.CL] 30 Jun 2023
# Preference Ranking Optimization for Human Alignment
Feifan Song1, Bowen Yu2*, Minghao Li2, Haiyang Yu2, Fei Huang2, Yongbin Li2, Houfeng Wang1* 1National Key Laboratory of Multimedia Information Processing, Peking University 2Alibaba Group [email protected] {yubowen.ybw, lmh397008, yifei.yhy, shuide.lyb}@alibaba-inc.com [email protected]
# Abstract | 2306.17492#0 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 0 | arXiv:2306.17563v1 [cs.IR] 30 Jun 2023
# Preprint
LARGE LANGUAGE MODELS ARE EFFECTIVE TEXT RANKERS WITH PAIRWISE RANKING PROMPTING
Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky Google Research {zhenqin,jagerman,kaihuibj,hlz,junru,jmshen,tianqiliu,jialu, metzler,xuanhui,bemike}@google.com
# ABSTRACT | 2306.17563#0 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
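The abstract above describes PRP only at a high level; the sketch below illustrates one way its all-pairs variant can be realized. The prompt wording is paraphrased and `llm` is a placeholder for any text-generation API, so this is an illustration of the technique rather than the authors' implementation:

```python
# Hedged sketch of PRP's all-pairs aggregation; `llm` must be supplied.
from itertools import combinations

PROMPT = (
    "Given a query {query}, which of the following two passages is more "
    "relevant to the query?\n\nPassage A: {a}\n\nPassage B: {b}\n\n"
    "Output Passage A or Passage B:"
)

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM call here")

def prp_allpairs(query: str, passages: list[str]) -> list[int]:
    wins = [0.0] * len(passages)
    for i, j in combinations(range(len(passages)), 2):
        # Query both orderings to counteract sensitivity to input order.
        for a, b in [(i, j), (j, i)]:
            out = llm(PROMPT.format(query=query, a=passages[a], b=passages[b]))
            if "Passage A" in out:
                wins[a] += 1
            elif "Passage B" in out:
                wins[b] += 1
            else:  # inconsistent or unparseable answer: half credit each
                wins[a] += 0.5
                wins[b] += 0.5
    # Passage indices sorted by number of pairwise wins, best first.
    return sorted(range(len(passages)), key=lambda k: wins[k], reverse=True)
```

Scoring both orderings of each pair is what makes the aggregate ranking insensitive to input ordering, one of the benefits the abstract highlights.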
2307.00112 | 0 | # Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education
Prabin Sharma1, Kisan Thapa1, Prastab Dhakal 2, Mala Deep Upadhaya3, Dikshya Thapa1, Santosh Adhikari4, Salik Ram Khanal5
1 University of Massachusetts Boston, USA 2 Texas Tech University, USA 3 Coventry University, Coventry, England 4 MacNeal Hospital, Illinois, USA 5 Center for Precision and Automated Agricultural Systems, Washington State University, Prosser, USA | 2307.00112#0 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figures, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 1 | Large language models (LLMs) often contain misleading content, emphasizing the need to align them with human values to ensure secure AI systems. Reinforcement learning from human feedback (RLHF) has been employed to achieve this alignment by combining a reward model, typically based on Bradley-Terry paired comparison, with an RL algorithm such as Proximal Policy Optimization (PPO) to optimize LLM responses. However, RLHF exhibits complexity, instability, and sensitivity to hyperparameters. In this paper, we propose Preference Ranking Optimization (PRO) as an alternative to PPO for directly aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise Bradley-Terry comparison to accommodate preference rankings of any length. By iteratively contrasting the likelihood of generating responses, PRO instructs the LLM to prioritize the best response while progressively ranking the remaining responses. In this manner, PRO effectively transforms human alignment into aligning the probabil- | 2306.17492#1 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 1 | Ranking documents using Large Language Models (LLMs) by directly feeding the query and candidate documents into the prompt is an interesting and practical problem. However, there has been limited success so far, as researchers have found it difficult to outperform fine-tuned baseline rankers on benchmark datasets. We analyze pointwise and listwise ranking prompts used by existing methods and argue that off-the-shelf LLMs do not fully understand these ranking formulations, possibly due to the nature of how LLMs are trained. In this paper, we propose to significantly reduce the burden on LLMs by using a new technique called Pairwise Ranking Prompting (PRP). Our results are the first in the literature to achieve state-of-the-art ranking performance on standard benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on the Flan-UL2 model with 20B parameters outperforms the previous best approach in the literature, which is based on the blackbox commercial GPT-4 that has 50x (estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, | 2306.17563#1 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 1 | Abstract Artificial intelligence is gaining traction in more ways than ever before. The popularity of language models and AI-based businesses has soared since ChatGPT was made available to the general public via OpenAI. It is becoming increasingly common for people to use ChatGPT both professionally and personally. Considering the widespread use of ChatGPT and the reliance people place on it, this study determined how reliable ChatGPT can be for answering complex medical and clinical questions. Harvard University gross anatomy along with the United States Medical Licensing Examination (USMLE) questionnaire were used to accomplish the objective. The paper evaluated the obtained results using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation between format and prompt. Furthermore, the physician adjudicators independently rated the outcome's accuracy, concordance, and insight. As a result of the analysis, ChatGPT-generated answers were found to be more context-oriented and represented a better model for deductive reasoning than regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical questions and 60% on ethical questions. This means that ChatGPT is approaching the passing range for logical questions and has crossed the threshold for ethical | 2307.00112#1 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figures, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 2 | ity ranking of n responses generated by LLM with the preference ranking of humans towards these responses. Experiments have shown that PRO outperforms existing alignment algorithms, achieving comparable results to ChatGPT and human responses through automatic-based, reward-based, GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more diverse, and higher-quality preference ranking sequences can consistently enhance the performance of human alignment1. | 2306.17492#2 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 2 | which is based on the blackbox commercial GPT-4 that has 50x (estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while outperforming other existing solutions, such as InstructGPT which has 175B parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose several variants of PRP to improve efficiency and show that it is possible to achieve competitive results even with linear complexity. We also discuss other benefits of PRP, such as supporting both generation and scoring LLM APIs, as well as being insensitive to input ordering. | 2306.17563#2 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 2 | questions and 60% on ethical questions. This means that ChatGPT is approaching the passing range for logical questions and has crossed the threshold for ethical questions. The paper believes ChatGPT and other language learning models can be invaluable tools for e-learners; however, the study suggests that there is still room to improve their accuracy. In order to improve ChatGPT's performance in the future, further research is needed to better understand how it can answer different types of questions. | 2307.00112#2 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figures, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
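The evaluation described in this abstract, a two-way ANOVA over answer format and prompt style, can be expressed in a few lines of statsmodels. The column names and numbers below are illustrative stand-ins, not the study's dataset:

```python
# Hedged sketch of a two-way ANOVA; data and column names are made up.
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "accuracy": [0.62, 0.58, 0.71, 0.55, 0.66, 0.60, 0.74, 0.57],
    "fmt":      ["mcq", "mcq", "open", "open"] * 2,
    "prompt":   ["plain"] * 4 + ["contextual"] * 4,
})

# Two-way ANOVA with interaction: accuracy ~ format * prompt.
model = ols("accuracy ~ C(fmt) * C(prompt)", data=df).fit()
print(anova_lm(model, typ=2))
```

A significant interaction term is what "systematic covariation between format and prompt" would look like in this framing.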
2306.17492 | 3 | # Introduction
Large language models (LLMs) have demonstrated remarkable capabilities in meeting the diverse information needs of users (Brown et al., 2020b; Chowdhery et al., 2022; Bubeck et al., 2023; Touvron
Figure 1: Comparison among different human alignment paradigms. SFT utilizes just the most preferred response y^1, while RLHF first samples candidates y^i > y^j from the whole ranking to train a reward model, then relies on it to fine-tune the agent LM. The proposed PRO instead distinguishes y^k against all members from the sub-ranking y^{k:n}. | 2306.17492#3 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 3 | # INTRODUCTION
Large Language Models (LLMs) such as GPT-3 (Brown et al., 2020) and PaLM (Chowdhery et al., 2022) have demonstrated impressive performance on a wide range of natural language tasks, achieving comparable or better performance when compared with their supervised counterparts that are potentially trained with millions of labeled examples, even in the zero-shot setting (Kojima et al., 2022; Agrawal et al., 2022; Huang et al., 2022; Hou et al., 2023).
However, there is limited success for the important text ranking problem using LLMs (Ma et al., 2023). Existing results usually significantly underperform well-trained baseline rankers (e.g., Nogueira et al. (2020); Zhuang et al. (2023)). The only exception is a recent approach proposed in (Sun et al., 2023), which depends on the blackbox, giant, and commercial GPT-4 system. Besides the technical concerns such as sensitivity to input order (ranking metrics can drop by more than 50% when the input document order changes), we argue that relying on such blackbox systems is not ideal for academic researchers due to significant cost constraints and access limitations to these systems, though we do acknowledge the value of such explorations in showing the capacity of LLMs for ranking tasks.
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
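For concreteness, here is a hedged sketch of the pointwise "relevance generation" scoring that the introduction above critiques. It presupposes an API that can score candidate continuations (the `loglik` function is a hypothetical placeholder); a generation-only API cannot supply these log-probabilities, which is exactly the limitation noted above:

```python
# Hedged sketch; `loglik` is a stand-in for a scoring LLM API.
import math

def loglik(prompt: str, continuation: str) -> float:
    raise NotImplementedError("needs an LLM API that scores continuations")

def pointwise_score(query: str, passage: str) -> float:
    prompt = (f"Passage: {passage}\nQuery: {query}\n"
              "Does the passage answer the query?\n")
    ll_yes, ll_no = loglik(prompt, "Yes"), loglik(prompt, "No")
    # Normalize P(Yes) over the two answers; documents are then sorted by
    # this probability, which only ranks well if the model is calibrated.
    return math.exp(ll_yes) / (math.exp(ll_yes) + math.exp(ll_no))
```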
2307.00112 | 3 | Keywords: ChatGPT; invigilated exam; large language models; assessment cheating
Introduction The use of Artificial Intelligence (AI) for human-computer conversation began with the invention of the chatbot. The development of chatbots goes way back in history, with ELIZA being the first chatbot, developed by Weizenbaum (Weizenbaum, 1966), successively followed by other noticeable inventions: Artificial Linguistic Internet Computer Entity (ALICE) developed by Wallace (Wallace, 2009), Jabberwacky by Rollo Carpenter (De Angeli et al., 2005), and Mitsuku by Steve Worswick (Abdul-Kader et al., 2015). AI is the backbone of these intelligent agents, which can make decisions and respond based on human queries, environment, and experiences; this process is called model training. The chatbot is an example of an intelligent agent which uses Natural Language Processing (NLP) to respond
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figures, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 4 | et al., 2023; Li et al., 2023), primarily attributed to the extensive range of information sources integrated into their pretraining datasets (Laurençon et al., 2022; Muennighoff et al., 2023). Nevertheless, despite leveraging the extensive global knowledge and human behavior encoded within their trillion-token pretraining corpus, LLMs are unavoidably impacted by the existence of misleading, toxic, and detrimental content encompassed within it (Bai et al., 2022b; Ouyang et al., 2022b). Consequently, aligning LLMs to human values, by selecting human-preferred responses from the vast response space of LLMs (Rafailov et al., 2023), becomes pivotal in constructing AI systems that are secure, efficient, and manageable for deployment across numerous applications (Peng et al., 2023).
* Corresponding author. 1The code of this work is available at https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/PRO
Several studies have employed reinforcement learning from human feedback (RLHF) to achieve | 2306.17492#4 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
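The Bradley-Terry objective mentioned above has a compact form: the reward of the human-preferred response should exceed that of the rejected one under a logistic link. A minimal PyTorch sketch (not the paper's code) follows:

```python
# Minimal sketch of the Bradley-Terry reward-model loss used in RLHF.
import torch
import torch.nn.functional as F

def bt_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # P(chosen > rejected) = sigmoid(r_chosen - r_rejected);
    # minimize the negative log-likelihood of the human preference.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# e.g., scalar rewards for a toy batch of preference pairs:
r_chosen = torch.tensor([1.2, 0.4, 2.0])
r_rejected = torch.tensor([0.3, 0.5, 1.1])
print(bt_loss(r_chosen, r_rejected))  # small when chosen consistently outscores rejected
```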
2306.17563 | 4 | In this work, we first discuss why it is difficult for LLMs to perform ranking tasks with existing methods, specifically, the pointwise and listwise formulations. For pointwise approaches, ranking requires LLMs to output calibrated prediction probabilities before sorting, which is known to be very difficult and is not supported by the generation-only LLM APIs (such as GPT-4). For listwise approaches, even with instructions that look very clear to humans, LLMs can frequently generate
(a) Passage: {passage} Query: {query} Does the passage answer the query? Yes / No (b) The following are passages related to query {query} [1] {passage_1} [2] {passage_2} ... Rank these passages based on their relevance to the query. LLM output: [5] > [1] > [2] > ...
Figure 1: Two existing prompting methods for ranking: (a) the pointwise relevance generation approach and (b) the listwise permutation approach. | 2306.17563#4 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
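Figure 1(b) above shows the listwise permutation prompt and its "[5] > [1] > [2]" style output. A sketch of constructing that prompt and tolerantly parsing the permutation follows; the instruction text mirrors the figure, while the parsing heuristics are an assumption on our part:

```python
# Sketch of the listwise permutation approach from Figure 1(b);
# the fallback handling of malformed output is our own assumption.
import re

def listwise_prompt(query: str, passages: list[str]) -> str:
    numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (f"The following are passages related to query {query}\n"
            f"{numbered}\n"
            "Rank these passages based on their relevance to the query.")

def parse_permutation(output: str, n: int) -> list[int]:
    seen, order = set(), []
    for tok in re.findall(r"\[(\d+)\]", output):
        i = int(tok) - 1
        if 0 <= i < n and i not in seen:
            seen.add(i)
            order.append(i)
    # LLMs frequently emit incomplete or malformed permutations; append any
    # missing indices in their original order as a fallback.
    order += [i for i in range(n) if i not in seen]
    return order
```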
2307.00112 | 4 | like a smart entity when given instruction through text or voice (Khanna et al., 2015). Lexico defines a chatbot as “A computer program designed to simulate conversation with human users, especially over the Internet”. NLP uses machine learning algorithms for processing the lexical meaning of words and sentences. These algorithms are mostly based on neural networks and are trained using a large volume and variety of data. The training requires a powerful computing device and takes a very long time to complete. Chatbots are systems that train on enormous amounts of data for a long time to produce text and voice like humans. With the development of powerful deep learning algorithms, the chatbot jumps to the next level with more natural and interactive human-computer conversation. | 2307.00112#4 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figures, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 5 | this goal (Stiennon et al., 2020a; Xue et al., 2023). RLHF involves fitting a reward model to human preferences, employing the Bradley-Terry paired comparison (Bradley and Terry, 1952). Bradley-Terry seeks to assign higher scores to preferable responses in comparison to unfavorable ones when presented with the same prompt. The RL algorithm, specifically PPO (Schulman et al., 2017), is then utilized to optimize an LLM for generating high-reward responses (Akyürek et al., 2023). This approach offers two notable advantages over supervised fine-tuning. Firstly, it has the capability to utilize both positively and negatively labeled responses (Zhang et al., 2023). Secondly, it can engage in self-bootstrapping to rectify the model's inadequate responses (Kwon et al., 2023). Although impressive, the RLHF pipeline is significantly more complex than supervised learning, prone to optimization instability, and sensitive to hyperparameters (Rafailov et al., 2023; Wu | 2306.17492#5 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
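To make the Bradley-Terry objective from the chunk above concrete, here is a minimal PyTorch sketch; `reward_model` is a hypothetical callable mapping a (prompt, response) pair to a scalar score tensor, not an interface defined in the paper:

```python
import torch.nn.functional as F

def bradley_terry_loss(reward_model, prompt, chosen, rejected):
    # Score the human-preferred response and the rejected one for the same prompt.
    r_chosen = reward_model(prompt, chosen)      # scalar tensor (hypothetical API)
    r_rejected = reward_model(prompt, rejected)  # scalar tensor (hypothetical API)
    # Bradley-Terry models P(chosen beats rejected) = sigmoid(r_chosen - r_rejected);
    # minimizing the negative log-likelihood pushes preferred scores higher.
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```

In the RLHF pipeline described above, this loss fits the reward model; PPO then optimizes the LLM against the fitted reward.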
2306.17563 | 5 | Figure 1: Two existing prompting methods for ranking: (a) the pointwise relevance generation approach and (b) the listwise permutation approach.
conflicting or useless outputs. Empirically we find that listwise ranking prompts from existing work generate completely useless outputs on moderate-sized LLMs. Such observations show that existing popular LLMs do not fully understand ranking tasks, potentially due to the lack of ranking awareness during their pre-training and fine-tuning procedures. | 2306.17563#5 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
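For reference, a sketch of the kind of listwise permutation prompt discussed above; the wording is illustrative and paraphrased, not the exact template from prior work:

```python
def listwise_prompt(query: str, passages: list[str]) -> str:
    # Number the candidates so the model can answer with identifiers only.
    numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        f"The following passages are related to the query: {query}\n"
        f"{numbered}\n"
        "Rank the passages above from most to least relevant to the query, "
        "answering only with identifiers, e.g. [2] > [3] > [1]."
    )
```

Producing and parsing such a full ordering in one shot is exactly where moderate-sized LLMs are reported to fail, which motivates the pairwise reformulation described later.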
2307.00112 | 5 | An American AI research laboratory, OpenAI, released an AI-based chatbot called Chat Generative Pre-Trained Transformer (ChatGPT) on November 30, 2022. It is a deep learning model trained with supervised learning and Reinforcement Learning from Human Feedback (Zhang et al., 2023) on the fine-tuned GPT-3.5 series, which allows users to ask questions and receive answers interactively. The model was trained on billions of texts on Azure infrastructure. According to the released documentation of OpenAI (Tom B et al., 2020), the model was trained on almost 570 GB of data, including books, web pages, and other sources (Gratas, 2023). A GPT (Generative Pre-trained Transformer) is an autoregressive language model that uses deep learning transformer models to produce human-like results in text format. ChatGPT uses self-attention mechanisms and a large amount of training data to generate natural language responses to text input in a conversational context. | 2307.00112#5 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
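As a rough illustration of the self-attention mechanism mentioned above, here is a textbook single-head causal attention step of the kind GPT-style models stack; this is a generic sketch, not ChatGPT's actual implementation:

```python
import torch

def causal_self_attention(x, w_q, w_k, w_v):
    # x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_head) projection matrices.
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = (q @ k.T) / (k.shape[-1] ** 0.5)
    # Autoregressive masking: each token attends only to itself and earlier tokens.
    mask = torch.triu(torch.ones_like(scores, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v
```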
2306.17492 | 6 | more complex than supervised learning, prone to optimization instability, and sensitive to hyperparameters (Rafailov et al., 2023; Wu et al., 2023; Yuan et al., 2023). These limitations arise mainly from employing the PPO algorithm to align the LLM with the reward model's preferences. However, the reward model itself aims to optimize the Bradley-Terry paired comparison. This prompts an important research question: Is it possible to bypass the requirement for PPO and enable direct learning of the Bradley-Terry comparison by the LLM? | 2306.17492#6 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 6 | We then propose the pairwise ranking prompting (PRP) paradigm, which uses the query and a pair of documents as the prompt for LLMs to perform ranking tasks, with the motivation to significantly reduce the task complexity for LLMs and resolve the calibration issue. PRP is based on simple prompt design and naturally supports both generation and scoring LLM APIs. We describe several variants of PRP to address efficiency concerns. PRP results are the first in the literature that can achieve state-of-the-art ranking performance by using moderate-sized, open-sourced LLMs on standard benchmark datasets. On TREC-DL2020, PRP based on the FLAN-UL2 model with 20B parameters outperforms the previous best approach in the literature, based on the blackbox commercial GPT-4 that has (an estimated) 50X model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, but can outperform existing solutions, such as InstructGPT which has 175B parameters, by over | 2306.17563#6 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
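A hedged sketch of what a PRP-style pairwise prompt and comparison could look like; `llm` is a hypothetical text-in/text-out callable, and the wording paraphrases rather than reproduces the paper's template:

```python
def prp_prompt(query, passage_a, passage_b):
    return (
        f'Given the query "{query}", which of the following two passages '
        "is more relevant to the query?\n"
        f"Passage A: {passage_a}\n"
        f"Passage B: {passage_b}\n"
        "Output Passage A or Passage B:"
    )

def prp_compare(llm, query, passage_a, passage_b):
    # Prompt the pair in both orders to neutralize any position bias.
    out_1 = llm(prp_prompt(query, passage_a, passage_b))
    out_2 = llm(prp_prompt(query, passage_b, passage_a))
    score_a = ("Passage A" in out_1) + ("Passage B" in out_2)
    score_b = ("Passage B" in out_1) + ("Passage A" in out_2)
    return score_a - score_b  # > 0 prefers A, < 0 prefers B, 0 is a tie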
ChatGPT is one of the largest language models created to date. This model uses a fine-tuned GPT-3.5 model that can perform a variety of tasks, such as question answering, summarization, and translation. ChatGPT has even been used to generate human-like texts such as stories, poems, and even computer code. It has been integrated into various fields like design, virtual assistants, website chat technology, internet search technology, and even messaging apps. It is sometimes even said to outperform human beings in certain tasks. ChatGPT is currently available to developers via an API, enabling them to create their own applications with the help of its automation and information-generation capabilities. ChatGPT has widely impacted companies in technology, education, business services, as well as finance and manufacturing (Zarifhonarvar et al., 2023). As this AI development appears to revolutionize conventional educational procedures, educators' reactions to ChatGPT's extraordinary skills in carrying out complex tasks in the field of education have ranged widely (Baidoo-Anu et al., 2023). It is possible to enhance learning and teaching for individuals at all educational levels, including primary, secondary, tertiary, and professional development, by | 2307.00112#6 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 7 | In this paper, we propose Preference Ranking Optimization (PRO) as a replacement for PPO, providing an exceptionally exciting answer to this question. We first extend the pairwise comparison of Bradley-Terry to encompass comparisons within preference rankings of arbitrary lengths. Let us assume that given a prompt x, we have access to a set of ranked responses represented as y1, y2, · · · , yn. The PRO algorithm begins by teaching the LLM to treat the best response y1 as the positive and treat the remaining responses as negatives by contrasting generation likelihood. This prioritization implies that the likelihood of the LLM generating this reply is significantly higher compared to generating other responses that humans consider inferior. It then iteratively removes the current response and proceeds to the next one. This process is repeated until there are no responses that perform worse than the current response, which indicates reaching yn and sufficiently imposing the desired ranking preferences. PRO aims to achieve a probability ranking of n responses generated by the LLM that aligns with human preference ranking. As n approaches infinity, we can consider the output space of LLM to be perfectly aligned with human preferences. Specifically, when n = 2, PRO effectively optimizes the LLM using the Bradley-Terry Comparison method. | 2306.17492#7 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
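The iterative contrast described in the chunk above can be written compactly. A minimal sketch, assuming `response_logprobs` holds the LLM's (for example, length-normalized) log-likelihoods of y1, ..., yn arranged best-first by human preference:

```python
import torch

def pro_loss(response_logprobs):
    # At step k, contrast y_k against every response ranked worse than it:
    # a softmax over {y_k, ..., y_n} should put its mass on y_k.
    n = response_logprobs.shape[0]
    loss = response_logprobs.new_zeros(())
    for k in range(n - 1):
        loss = loss - torch.log_softmax(response_logprobs[k:], dim=0)[0]
    return loss
```

With n = 2 this reduces to a single softmax over the preferred and dispreferred responses, mirroring the Bradley-Terry special case noted above.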
solution on the NDCG@5 and NDCG@10 metrics, but can outperform existing solutions, such as InstructGPT which has 175B parameters, by over 10% for nearly all ranking metrics. We also show competitive results using FLAN-T5 models with 3B and 13B parameters, demonstrating the power and generality of PRP. We further discuss other benefits of PRP, such as supporting both generation and scoring LLM APIs as well as being insensitive to input ordering. | 2306.17563#7 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 7 | et al., 2023). It is possible to enhance learning and teaching for individuals at all educational levels, including primary, secondary, tertiary, and professional development, by utilizing ChatGPT models. Furthermore, these advanced language models offer a unique opportunity to provide personalized and significant educational experiences because every person has different learning preferences, aptitudes, and needs (Kasneci et al., 2023). | 2307.00112#7 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 8 | This formulation possesses the following advantages: (1) PRO allows for the complete utilization of ranking sequences of any length, unlike standard fine-tuning that only considers the best response (Zhang et al., 2023), or RLHF that relies solely on pairwise comparisons for training the reward model (Stiennon et al., 2020a). With longer ranking sequences, PRO can better approximate the goal of Human Alignment: selecting human-preferred responses from the response space of LLMs, by identifying more responses that are known to be worse than a given response in human values. (2) PRO naturally inherits the self-bootstrapping benefit of RLHF. During training, responses sampled from the LLM can be added to the response set and reranked based on their reward scores, using an additional reward model similar to RLHF. The LLM is then continuously optimized by PRO on the extended preference sequences. (3) PRO only requires the inclusion of a differentiable contrastive loss on top of standard fine-tuning, avoiding the drawbacks associated with RL's non-differentiable optimization. | 2306.17492#8 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
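A sketch of the self-bootstrapping in advantage (2) above; `llm` and `reward_model` are illustrative callables (text generation and scalar scoring, respectively), not APIs from the paper:

```python
def extend_preference_ranking(llm, reward_model, prompt, ranked_responses, n_samples=2):
    # Sample fresh responses from the current LLM and merge them with the
    # existing preference set.
    candidates = list(ranked_responses) + [llm(prompt) for _ in range(n_samples)]
    # Rerank everything by reward score, best first, yielding a longer
    # preference sequence for the next round of PRO training.
    return sorted(candidates, key=lambda y: reward_model(prompt, y), reverse=True)
```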
2306.17563 | 8 | In summary, the contributions of this paper are three-fold:
⢠We for the ï¬rst time show pairwise ranking prompting is effective for zero-shot ranking with LLMs. It is able to produce state-of-the-art ranking performance with simple prompt- ing and scoring mechanism.
⢠Our results are based on moderate-sized, open-sourced LLMs, comparing with existing so- lutions that use blackbox, commercial, and much larger models. The ï¬nding will facilitate future research in this direction.
⢠We study several efï¬ciency improvements and show positive empirical performance while attaining linear complexity.
# 2 DIFFICULTIES OF RANKING TASKS FOR LLMS
As discussed in Section 1, to date there is limited evidence showing LLM-based rankers can outperform fine-tuned ones. We discuss why this is the case by analyzing existing methods, which can be categorized into pointwise or listwise approaches.
# 2.1 POINTWISE APPROACHES
Pointwise approaches are the major methods prior to very recent listwise approaches discussed in Section 2.2. There are two popular methods, relevance generation (Liang et al., 2022) and query
generation (Sachan et al., 2022). Figure 1 (a) shows the prompt used for relevance generation. The relevance score si is defined as: | 2306.17563#8 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
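The chunk above is cut off before the score definition. One natural instantiation, offered purely as an assumption rather than the paper's formula, scores a document by the model's normalized probability of answering "Yes" to the relevance question:

```python
import math

def relevance_score(logp_yes, logp_no):
    # Normalize over the two answer options to obtain P("Yes" | prompt).
    p_yes = math.exp(logp_yes)
    return p_yes / (p_yes + math.exp(logp_no))
```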
2307.00112 | 8 | With its wide range of impactful applications, researchers in related areas have shown special interest in ChatGPT research. Most researchers have focused on evaluating ChatGPT's ability to answer questions. Borji (Borji et al.) comprehensively describes ChatGPT's failures, including reasoning, math, coding, bias, and factual errors, while also highlighting the risks, limitations, and societal implications of ChatGPT. They asked ChatGPT questions in several categories and analyzed the generated outputs.
Likewise, Terwiesch (Terwiesch, 2023) tested ChatGPT on the final exam of an Operations Management course, one of the core courses of an MBA, to assess its performance. ChatGPT's performance on the MBA Operations Management final exam revealed both strengths and weaknesses, earning a grade between B and B-. Although it excelled at fundamental operations management and process analysis, it struggled with complex topics and straightforward math. This result highlights how crucial it is to take into account AI's influence on business education, curriculum development, and teaching strategies. One notable finding of this research is that the responses generated by ChatGPT lack credit and references to sources. Nevertheless, ChatGPT's usefulness for learning purposes is undeniable. | 2307.00112#8 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 9 | We conduct experiments on HH-RLHF to thoroughly compare our PRO with LLaMA, Alpaca, ChatGPT, and other competitive human alignment algorithms such as BoN, CoH, RLHF, and RRHF, using various evaluation methods including automatic scoring, reward modeling, GPT-4 evaluation, and human evaluation. Our observations are as follows: (1) With a ranking length of 2, our PRO has surpassed the current competitive baselines. It outperforms SFT by 6.52 points and RRHF by 3.1 points, establishing itself as the state-of-the-art alignment algorithm. (2) The longer the ranking length in human preference ranking, the better the human alignment, and the more prominent the performance improvement of PRO. For instance, by adding responses generated by ChatGPT to the dataset and increasing the ranking length to 3, PRO achieves a 4.14-point improvement over BoN and a 4.85-point improvement over RRHF, with a reward score similar to ChatGPT, but with only 7B parameters. (3) The higher the quality and diversity of the candidates in the preference ranking sequence, the better the performance of PRO. (4) The
performance gain from self-bootstrapping is lower compared to adding high-quality outputs generated by other LLMs to the preference ranking sequence. | 2306.17492#9 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |