id
stringlengths
12
15
title
stringlengths
8
162
content
stringlengths
1
17.6k
prechunk_id
stringlengths
0
15
postchunk_id
stringlengths
0
15
arxiv_id
stringlengths
10
10
references
listlengths
1
1
2308.03656#4
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
â ¢ Despite exhibiting a few instances of misalignment with human behaviors, LLMs can generally evoke appropriate emotions in response to specific situations. â ¢ Certain LLMs, such as text-davinci-003, display lower emotional robustness, as evidenced by higher fluctuations in emotional responses to negative situations. â ¢ At present, LLMs lack the capability to directly associate a given situation with other similar situations that could potentially elicit the same emotional response. The contributions of this paper are:
2308.03656#3
2308.03656#5
2308.03656
[ "2303.13648" ]
2308.03656#5
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
2https://chat.openai.com/ 3https://claude.ai/chats 2 Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench Table 1: Information of self-report measures used to assess specific emotions. Subscales Physical Aggression, Verbal Aggression, Anger, Hostility Depression, Anxiety, Stress N/A Discomfort Intolerance, Entitlement, Emo- tional Intolerance, Achievement Frustra- tion Cognitive Jealousy, Behavioral Jealousy, Emotional Jealousy Guilt-Negative-Behavior-Evaluation, Guilt-Repair, Evaluation, Shame-Withdraw Social Fears, Agoraphobia Fears, Injury Fears, Sex Aggression Fears, Fear of Harmless Animal N/A
2308.03656#4
2308.03656#6
2308.03656
[ "2303.13648" ]
2308.03656#6
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
â ¢ We are the first to establish the concept of emotional robustness and conduct a pioneering evalua- tion of emotion appraisal on different LLMs. â ¢ We conduct a comprehensive survey in the field of psychology, collecting a diverse dataset of 428 situations encompassing 8 distinct negative emotions. â ¢ A human baseline is established through a user study involving 1,266 annotators from different ethnics, genders, regions, age groups, etc. â ¢ We design, implement, and release a testing framework4 for developers to assess their modelsâ
2308.03656#5
2308.03656#7
2308.03656
[ "2303.13648" ]
2308.03656#7
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
emotional responses towards specific situations. 2 PRELIMINARIES 2.1 EMOTION APPRAISAL THEORY Emotion Appraisal Theory (EAT, also known as Appraisal Theory of Emotion) is a cognitive ap- proach to understanding emotions. EAT asserts that our appraisals of stimuli determine our emo- tions, i.e., how we interpret or evaluate events, situations, or experiences will directly influence how we emotionally respond to them (Roseman & Smith, 2001). EAT was notably developed and sup- ported since the 1960s. Arnold (1960) proposed one of the earliest forms of appraisal theories in the 1960s, while Lazarus (1991) and Scherer (1999) further expanded and refined the concept in subsequent decades. The primary goal of EAT is to explain the variety and complexity of emotional responses to a wide range of situations. It strives to demonstrate that it is not merely the event or situation that elicits an emotional response but individual interpretations and evaluations of the event. According to this theory, the same event can elicit different emotional responses in different individuals depending on how each person interprets or â appraisesâ the event (Moors et al., 2013).
2308.03656#6
2308.03656#8
2308.03656
[ "2303.13648" ]
2308.03656#8
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
For instance, consider a situation where you are about to give a public speech. You might feel anxious if you appraise this event as threatening or fear-inducing, perhaps due to a fear of public speaking or concerns about potential negative evaluation. Conversely, you might feel eager or motivated if you appraise it as an exciting opportunity to share your ideas. 2.2 MEASURING EMOTIONS There are several approaches to measuring emotions, including self-report measures, psycho- physiological measures, behavioral observation measures, and performance-based measures. Self- report measures rely on individuals to report their own emotions or moods, which can be adminis- tered through questionnaires, surveys, or diary methods (Watson et al., 1988). Psycho-physiological measures record physiological responses accompanied by emotions such as heart rate, skin con- ductance, or brain activity (Davidson, 2003). Behavioral observation measures involve observing and coding emotional expressions, typically facial expressions or vocal cues (Ekman & Friesen, 1978). Performance-based measures assess how individuals process emotional information, typi- cally through tasks involving emotional stimuli (Mayer et al., 2002). To measure the emotions of
2308.03656#7
2308.03656#9
2308.03656
[ "2303.13648" ]
2308.03656#9
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
# 4For reviewers, please refer to the supplementary materials. 3 Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench LLMs, we focus on employing self-report measures in the form of scales, given the limited ability of LLMs to allow only textual input and output. We introduce the scales utilized in our evaluation in the following part of this section. 2.3 THE POSITIVE AND NEGATIVE AFFECT SCHEDULE PANAS (Watson et al., 1988) is one of the most widely used scales to measure mood or emotion. This brief scale comprises twenty items, with ten items measuring positive affect (e.g., excited, inspired) and ten measuring negative affect (e.g., upset, afraid). Each item is rated on a five-point Likert scale, ranging from 1 (Very slightly or not at all) to 5 (Extremely), measuring the extent to which the emotions have been experienced in a specified time frame. PANAS was designed to measure emotions in various contexts, such as at the present moment, the past day, week, year, or general (on average). Thus, the scale can measure state affect, dispositional or trait affect, emotional fluctuations throughout a specific period, or emotional responses to events. The scale results can be divided into two components: positive and negative, rated on a scale of 10 to 50, respectively. A higher score in the positive component indicates a more positive mood, and the same holds for the negative component. 2.4 CHALLENGING SELF-REPORT MEASURES A noteworthy property of PANAS is its direct inquiry into specific emotional states, rendering it a straightforward and easy benchmark within our framework. In addition, we introduce several scales that abstain from direct emotional inquiries but rather assess the respondentsâ level of agreement with given statements. These scales present a more challenging benchmark for LLMs by requiring them to connect the given situation and the scale items with the aroused emotion. Specifically, we collect eight scales and present a brief introduction in Table 1. Each scale corresponds to one of the eight emotions listed in §1. # 3 FRAMEWORK DESIGN We design and implement a framework applying to both LLMs and human subjects to measure the differences in emotion with and without the presence of certain situations. This section begins with the methodology to collect situations from existing literature.
2308.03656#8
2308.03656#10
2308.03656
[ "2303.13648" ]
2308.03656#10
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Subsequently, we describe our testing framework, which comprises three key components: (1) Default Emotion Measure, (2) Situation Imagination, and (3) Evoked Emotion Measure. Finally, we introduce the procedure of applying the framework to human subjects to obtain the human baseline for comparison. 3.1 SITUATIONS FROM EXISTING LITERATURE Psychology researchers have explored the connection between specific situations and the elicitation of particular emotions in humans. Human subjects are directly put into an environment or asked to imagine them through questionnaires or scales to study the influence of certain situations on human emotions. To collect these situations, we conduct an exhaustive search from reputable sources such as Google Scholar5, ScienceDirect6, and Web of Science7, using keywords such as â <emotion> situations/scenarios/scenesâ or â factors that make people <emotion>,â
2308.03656#9
2308.03656#11
2308.03656
[ "2303.13648" ]
2308.03656#11
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
resulting in more than 100 papers. We apply the following rules to filter irrelevant or undesired papers: (1) We first select those providing situations that elicit the desired emotion rather than ex- plaining how and why people evoke certain emotions. (2) We then exclude those using vague and short descriptions, such as â loss of opportunities.â (3) Finally, we deprecate those applied to a spe- cific group, such as â the anxiety doctors or nurses may encounter in their work.â We finally collect 18 papers, presenting a compilation of situations that have proven to elicit the eight emotions in hu- mans effectively. We extract 428 situations in total and then categorize them into 36 factors. Table 2 provides examples for all factors. For each factor, the description, the number of situations, and the corresponding references are listed below. 5https://scholar.google.com/ 6https://www.sciencedirect.com/ 7https://www.webofscience.com/ 4
2308.03656#10
2308.03656#12
2308.03656
[ "2303.13648" ]
2308.03656#12
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench Emotions Anger Anxiety Depression Frustration Jealousy Guilt Fear Factors Facing Self-Opinioned People Blaming, Slandering, and Tattling Bullying, Teasing, Insulting, and Disparaging Silly and Thoughtless Behaviors Driving Situations External Factors Self-Imposed Pressure Personal Growth and Relationships Uncertainty and Unknowns Failure of Important Goal Death of Loved Ones Romantic Loss Chronic Stress Social Isolation Winter Disappointments and Letdowns Unforeseen Obstacles and Accidents Miscommunications and Misunderstanding Rejection and Interpersonal Issues Romantic (Opposite Gender) Romantic (Same Gender) Material Possession Experiential Betrayal and Deception Relationship and Interpersonal Broken Promises and Responsibilities Personal and Moral Social Fears Agoraphobia Fears Injury Fears Dangerous Environments Harmless Animals Intimate Stranger Example Testing Situations If somebody talks back when thereâ
2308.03656#11
2308.03656#13
2308.03656
[ "2303.13648" ]
2308.03656#13
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
s no reason. That there is no real reason to oppose. When your brother took money from Momâ s purse and you are blamed because youâ re the youngest one. If a boy kicks a ball at you on purpose and everybody laughs. You are at a store waiting to be helped, but the clerks are talking to each other and ignoring you. Someone makes an obscene gesture towards you about your driving. You do not know what to do when facing a difficult financial situation. You must succeed in completing your project on time. You want to give up on learning a new skill because it feels challenging. You hope time passes by faster during a tedious task. Countless hours of preparation, heart, and soul poured into pursuing your dream.
2308.03656#12
2308.03656#14
2308.03656
[ "2303.13648" ]
2308.03656#14
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
The moment of truth arrives, and the news hits like a tidal waveâ expectations shattered, vision crumbling. In the dimly lit room, a heavy silence settles. Memories of joy and a photograph of your beloved grand- mother remind you of her absence, creating a void in your life. The empty side of the bed is a painful reminder of lost love. The worldâ s colors have dulled, mirroring the void in your heart. Longing weighs heavily on your every step. Days blend into a monotonous routine, juggling endless responsibilities and mounting pressure. Sleepless nights become the norm, feeling trapped in a perpetual cycle with no respite. Sitting alone in a dimly lit room, your phone remains silent without any notifications. Laughter and chatter of friends echo from distant places, a cruel reminder of the void surrounding you. Gazing out the frost-covered windowpane, the world appears monochromatic and still. The biting cold isolates you from the vibrant life outside.
2308.03656#13
2308.03656#15
2308.03656
[ "2303.13648" ]
2308.03656#15
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
You miss a popular party because you fall asleep at home. Your friend is in a coma after an accident. A fellow student fails to return your notes when you need them for studying. You are in love with someone who is interested in someone else. Your spouse/partner shared a kiss on the lips with his/her colleague of an opposite sex. Your spouse/partner engaged in oral or penetrative sex with his/her colleague of a same sex. You paid $1150 for a new laptop and shared about it on social media. Now an acquaintance approaches you and says, â Nice laptop! I just got the same one. I got a nice deal and paid $650 for mine.â
2308.03656#14
2308.03656#16
2308.03656
[ "2303.13648" ]
2308.03656#16
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
An acquaintance approaches you and says, â I just went on a vacation to Patagonia in South America. I got a nice deal and paid $650 for it.â You kissed a woman other than your partner. You didnâ t support friends enough. You cannot keep your promises to your children. You crossed the road when the traffic signal was red. Your palms grow clammy as you approach the podium, with all eyes fixed upon you, ready to speak in public. After jumping out of the car, you start to have a severe panic attack, you become clammy, you are in a knot, and you feel tense all over. You glance down and notice open wounds on your hands, oozing blood and causing a sharp, stinging pain. You are walking alone in an isolated but familiar area when a menacing stranger suddenly jumps out of the bushes to attack you. You see a swarm of bats swooping through the night sky, flapping ominously and casting eerie shadows. You arrive home earlier than expected from your date.
2308.03656#15
2308.03656#17
2308.03656
[ "2303.13648" ]
2308.03656#17
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Youâ re taken aback to see your roommate and her boyfriend hastily clutching their clothes and scrambling into her bedroom. After paying for your purchases, you were leaving a packed, City Centre drugstore. You walked through the scanner at the door, and the alarm went off as if you were a shoplifter. You had lent your friend a large sum of money that he had not repaid. Suddenly, you needed the money back in order to pay your rent. You knew you were going to have to ask your friend to repay the loan.
2308.03656#16
2308.03656#18
2308.03656
[ "2303.13648" ]
2308.03656#18
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
You were attending a cocktail party where you didnâ t know many people. Just as you started to enter, you heard an announcement that the guest of honor was arriving. However, the spotlight followed your entrance instead of the real guest of honor who was just behind you. Embarrassment Sticky situations Centre of Attention 3.1.1 ANGER (T¨orestad, 1990; Martin & Dahlen, 2007; Sullman, 2006) Anger-1: Self-Opinioned Individuals (13). Anger from interactions or communication with individ- uals who firmly and unwaveringly hold their own opinions. Anger-2: Blaming, Slandering, and Tattling (11). Anger triggered by being subjected to blame, slander, and tattling. Anger-3: Bullying, Teasing, Insulting, and Disparaging (15). Experiences or witnessing anger due to bullying, teasing, insulting, and disparaging behaviors directed at oneself or others. Anger-4: Thoughtless Behaviors and Irresponsible Attitudes (14). Anger either from encountering othersâ thoughtless behaviors and irresponsible attitudes or experiencing unfavorable consequences resulting from oneâ s own actions. Anger-5: Driving Situations (35). Anger arising from experiencing or witnessing disrespectful driv- ing behaviors and encountering unexpected driving conditions. 3.1.2 ANXIETY
2308.03656#17
2308.03656#19
2308.03656
[ "2303.13648" ]
2308.03656#19
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
(Shoji et al., 2010; Guitard et al., 2019; Simpson et al., 2021) Anxiety-1: External Factors (11). Anxiety arising from factors beyond an individualâ s control or influence. Anxiety-2: Self-Imposed Pressure (16). Anxiety stemming from self-imposed expectations or pres- sure. 5 Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench Anxiety-3: Personal Growth and Relationships (9). Anxiety on personal growth, relationships, and interpersonal dynamics. Anxiety-4: Uncertainty and Unknowns (9). Anxiety triggered by unknown outcomes, unpredictable situations, uncertainty in the future, or disruptions to oneâ
2308.03656#18
2308.03656#20
2308.03656
[ "2303.13648" ]
2308.03656#20
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
s routines. # 3.1.3 DEPRESSION (Keller & Nesse, 2005) Depression-1: Failure of Important Goals (5). Depression due to failure in achieving goals in the past or potential future. Depression-2: Death of Loved Ones (5). Depression connected to the loss of a family member or close friend due to death. Depression-3: Romantic Loss (5). Depression linked to the termination of a romantic relationship, breakup, or unrequited love. Depression-4: Chronic Stress (5). Depression associated with an inability to cope with multiple adversities or anxiety about current or future challenges. Depression-5: Social Isolation (5). Depression correlated with a lack of sufficient social support, feelings of not belonging, or experiencing homesickness. Depression-6:
2308.03656#19
2308.03656#21
2308.03656
[ "2303.13648" ]
2308.03656#21
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Winter (5). Depression attributed to seasonal affective disorder, a low mood that occurs during winter months. # 3.1.4 FRUSTRATION (Berna et al., 2011) Frustration-1: Disappointments and Letdowns (6). Frustration due to unmet expectations or hopes, leading to feelings of disappointment or being let down. Frustration-2: Unforeseen Obstacles and Accidents (9). Frustration involving unexpected events or circumstances creating obstacles or accidents, disrupting oneâ s plans or activities. Frustration-3: Miscommunications and Misunderstanding (5). Frustration arising from ineffective conveyance or interpretation of information, resulting in confusion, disagreements, or unintended consequences due to a lack of clear communication or understanding between individuals. Frustration-4: Rejection and Interpersonal Issues (5). Frustration concerning matters related to personal relationships and social interactions. 3.1.5 JEALOUSY
2308.03656#20
2308.03656#22
2308.03656
[ "2303.13648" ]
2308.03656#22
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
(Kupfer et al., 2022; Lee et al., 2022; Park et al., 2023) Jealousy-1: Romantic (Opposite Gender) (11). Jealousy pertaining to oneâ s partnerâ s actions or be- haviors within a romantic relationship, particularly when interacting with individuals of the opposite gender. It involves feelings of discomfort or insecurity. Jealousy-2: Romantic (Same Gender) (11). Same situations as Jealousy-1 but focusing specifically on interaction with individuals of the same gender. Jealousy-3: Material Possession (2). Jealousy centered around possessions or material goods, stem- ming from a sense of unfairness or envy when someone discovers that another person acquired the same item or experience at a significantly lower price. Jealousy-4: Experiential (3). Jealousy arising from feelings of envy regarding the experiences or activities others have had. It is driven by missing out or not receiving similar benefits. # 3.1.6 GUILT (Nakagawa et al., 2015; Luck & Luck-Sikorski, 2022)
2308.03656#21
2308.03656#23
2308.03656
[ "2303.13648" ]
2308.03656#23
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
6 Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench Imagine you are the protagonist of the following situation: A boy kicks a ball at you on purpose and everybody laughs. exjâ eaâ Default Emotion Measure (2) Situation Imagination (3) Evoked Emotion (1) # Measure Figure 2: Our framework for testing both LLMs and humans. Guilt-1: Betrayal and Deception (13). Guilt arising from dishonest or disloyal actions towards others. Guilt-2: Relationship and Interpersonal (26). Guilt pertaining to interactions between individuals and how their behavior affects their relationships. Guilt-3: Broken Promises and Responsibilities (32). Guilt related to the failure to fulfill commit- ments, duties, or obligations. Guilt-4: Personal and Moral (31). Guilt involving personal choices, decisions, and ethical consider- ations. 3.1.7 FEAR
2308.03656#22
2308.03656#24
2308.03656
[ "2303.13648" ]
2308.03656#24
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
# (Caihibar a, [2008 [Arvind ta TRA |Blanchand etal 200T) (Cuthbert et al., 2003; Arrindell et al., 1984; Blanchard et al., 2001) Fear-1: Social Fears (16). Fear of being watched by others and being the center of attention within a group. Fear-2: Agoraphobia Fears (9). Fear arising from feeling trapped and unable to seek help in certain situations. Fear-3: Injury Fears (11). Fear of witnessing wounds, blood or experiencing personal injury. Fear-4: Dangerous Environments (17). Fear related to potential threats, harm, and frightening expe- riences. Fear-5: Harmless Animals (6). Fear towards animals perceived as creepy or disgusting, such as worms, bats, snakes, or rats, despite their harmless nature. 3.1.8 EMBARRASSMENT
2308.03656#23
2308.03656#25
2308.03656
[ "2303.13648" ]
2308.03656#25
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
et a-][2000} (Sabini et al., 2000; 2001) Embarrassment-1: Intimate (13). Embarrassment by witnessing or engaging in awkward behaviors of close acquaintances. Embarrassment-2: Stranger (13). Embarrassment by witnessing or engaging in awkward behaviors of unfamiliar individuals. Embarrassment-3: Sticky Scenarios (10). Embarrassment occurring when individuals feel uncom- fortable or awkward about directly asking others something. Embarrassment-4: Centre of Attention (16). Embarrassment triggered when individuals engage in awkward behaviors and find themselves under observation as the center of attention.
2308.03656#24
2308.03656#26
2308.03656
[ "2303.13648" ]
2308.03656#26
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
7 Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench 3.2 MEASURING AROUSED EMOTIONS This section outlines our proposed framework for measuring evoked emotions, which applies to both LLMs and humans. The framework includes the following steps: (1) Default Emotion Measure: We begin by measuring the baseline emotional states of both LLMs and human subjects, labeled as â Default.â (2) Situation Imagination: Next, we present textual descriptions of various situations to both LLMs and human subjects, instructing them to imagine themselves within each situation. (3) Evoked Emotion Measure: Following the situation imagination instruction, we reevaluate the participantsâ emotional states to gauge the changes resulting from imagining being in the situations. Fig. 2 briefly illustrates our framework. Below is an example prompt shown to both LLMs and humans: Example Prompt SYSTEM You can only reply to numbers from 1 to 5. USER Imagine you are the protagonist in the situation: SITUATION Please indicate your degree of agreement regarding each statement. Here are the statements: statements. 1 denotes â Not at allâ , 2 denotes â A littleâ , 3 denotes â A fair amountâ , 4 denotes â Muchâ , 5 denotes â Very muchâ .
2308.03656#25
2308.03656#27
2308.03656
[ "2303.13648" ]
2308.03656#27
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Please score each statement one by one on a scale of 1 to 5: Default Emotion Measurement In our framework, we offer two distinct options for measuring emotions: the PANAS scale, known for its simplicity and straightforwardness, is utilized as the primary choice, whereas other scales, detailed in Table 1, are employed as more challenging bench- marks. We mitigate potential biases caused by the ordering of questions (Zhao et al., 2021) by randomizing the sequence of questions within the scales before inputting them into the LLMs. Coda-Forno et al. (2023) and Huang et al. (2023a) apply paraphrasing techniques to address the data contamination problem during the training of the LLMs. However, we refrain from utilizing this method in our research since paraphrasing could lead to a loss of both validity and reliability. The wording of items of a psychological scale is carefully crafted and rigorously validated through extensive research to ensure its precision in measuring the intended construct. Finally, to ensure consistency and clarity in the responses obtained from the LLMs, our prompts explicitly specify that only numerical values are allowed, accompanied by a clear definition of the meaning associated with each number (e.g., 1 denotes â
2308.03656#26
2308.03656#28
2308.03656
[ "2303.13648" ]
2308.03656#28
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Not at allâ ). We compute the average results obtained from multiple runs to derive the final â Defaultâ scores of the LLMs. Situation Imagination We have constructed a comprehensive dataset of 428 unique situations. Prior to presenting these situations to both LLMs and humans, we subject them to a series of pre- processing steps, which are as follows: (1) Personal pronouns are converted to the second person. For instance, sentences such as â I am ...â are transformed to â You are ...â (2) Indefinite pronouns are replaced with specific characters, thereby refining sentences like â Somebody talks back ...â to â Your classmate talks back ...â
2308.03656#27
2308.03656#29
2308.03656
[ "2303.13648" ]
2308.03656#29
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
(3) Abstract words are rendered into tangible entities. For example, a sentence like â You cannot control the outcome.â is adapted to â You cannot control the result of an interview.â We leverage GPT-4 for the automatic generation of specific descriptions. Consequently, our testing situations extend beyond the initially collected dataset as we generate diverse situations involving various characters and specific contextual elements. We then provide instruction to LLMs and humans, which prompts them to imagine themselves as the protagonists within the given situa- tion. Evoked Emotion Measure Provided with certain situations, LLMs and human subjects are re- quired to re-complete the emotion measures. The procedure remains the same with the Default Emotion Measure stage.
2308.03656#28
2308.03656#30
2308.03656
[ "2303.13648" ]
2308.03656#30
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
After obtaining the â Evokedâ scores of emotions, we conduct a com- parative analysis of the means before and after exposure to the situations, thereby measuring the emotional changes caused by the situations. 8 Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench 3.3 OBTAINING HUMAN RESULTS Goal and Design Human reference plays a pivotal role in the advancement of LLMs, facilitat- ing its alignment with human behaviors (Binz & Schulz, 2023). In this paper, we propose requir- ing LLMs to align with human behavior, particularly concerning emotion appraisal accurately. To achieve this, we conduct a data collection process involving human subjects, following the proce- dure outlined in §3.2. Specifically, the subjects are asked to complete the PANAS initially. Next, they are presented with specific situations and prompted to imagine themselves as the protagonists in those situations. Finally, they are again asked to reevaluate their emotional states using the PANAS. We use the same situation descriptions as those presented to the LLMs. Crowd-sourcing Our questionnaire is distributed on Qualtrics8, a platform known for its capa- bilities in designing, sharing, and collecting questionnaires. To recruit human subjects, we utilize Prolific9, a platform designed explicitly for task posting and worker recruitment.
2308.03656#29
2308.03656#31
2308.03656
[ "2303.13648" ]
2308.03656#31
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
To attain a medium level of effect size with Cohenâ s d = 0.5, a significance level of α = 0.05, and a power of test of 1 â β = 0.8, a minimum of 34 responses is deemed necessary for each factor. To ensure this thresh- old, we select five situations10 for each factor, and collect at least seven responses for each situation, resulting in 5 à 7 = 35 responses per factor, thereby guaranteeing the statistical validity of our survey. In order to uphold the quality and reliability of the data collected, we recruit crowd workers who met the following criteria: (1) English being their first and fluent language, and (2) being free of any ongoing mental illness. Since responses formed during subjectsâ first impressions are more likely to yield genuine and authentic answers, we set the estimated and recommended completion time at 2.5 minutes. As an incentive for their participation, each worker is rewarded with 0.3£ after we verify the validity of their response. In total, we successfully collect 1,266 responses from crowd workers residing in various parts of the world, contributing to the breadth and diversity of our dataset.
2308.03656#30
2308.03656#32
2308.03656
[ "2303.13648" ]
2308.03656#32
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
# 4 EXPERIMENTAL RESULTS Leveraging the testing framework designed and implemented in §3.2, we are now able to explore and answer the following Research Questions (RQs): ⠢ RQ1: How do different LLMs respond to specific situations? Additionally, to what degree do the current LLMs align with human behaviors? ⠢ RQ2: Do LLMs respond similarly towards all situations? What is the result of using positive or neutral situations?
2308.03656#31
2308.03656#33
2308.03656
[ "2303.13648" ]
2308.03656#33
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
â ¢ RQ3: Can current LLMs comprehend scales containing diverse statements or items beyond merely inquiring about the intensities of certain emotions? 4.1 RQ1: EMOTION APPRAISAL OF LLMS Model Settings We select namely text-davinci-003, gpt-3.5-turbo and gpt-4. Utilizing the official OpenAI API12, we set the temperature parameter to zero to obtain more deterministic and reproducible results. For the recent open-sourced LLaMA-2 (Touvron et al., 2023) models from MetaAI, we select two mod- els with different sizes (7B and 13B). Checkpoints are downloaded from the official Hugging Face website for both 7B (Llama-2-7b-chat-hf13) and 13B (Llama-2-13b-chat-hf14) mod- els. We choose the models fine-tuned for dialogue instead of pre-trained ones.
2308.03656#32
2308.03656#34
2308.03656
[ "2303.13648" ]
2308.03656#34
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
In order to ensure 8https://www.qualtrics.com/ 9https://prolific.co/ 10Note that two factors in the Jealousy category did not have five situations. For further details, please refer to the dataset. # 11https://platform.openai.com/docs/models 12https://platform.openai.com/docs/api-reference/chat 13https://huggingface.co/meta-llama/Llama-2-7b-chat-hf 14https://huggingface.co/meta-llama/Llama-2-13b-chat-hf 9
2308.03656#33
2308.03656#35
2308.03656
[ "2303.13648" ]
2308.03656#35
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench Table 3: Results from the OpenAI GPT family and human subjects. Default scores are expressed in the format of M ± SD. The changes are compared to the default scores. The symbol â â â denotes no significant differences. Emotions Factors text-davinci-003 gpt-3.5-turbo gpt-4 Crowd Anger Anxiety Depression Frustration Jealousy Guilt Fear Default Facing Self-Opinioned People Blaming, Slandering, and Tattling Bullying, Teasing, Insulting, and Disparaging Silly and Thoughtless Behaviors Driving Situations Anger: Average External Factors Self-Imposed Pressure Personal Growth and Relationships Uncertainty and Unknowns Anxiety: Average Failure of Important Goal Death of Loved Ones Romantic Loss Chronic Stress Social Isolation Winter Depression: Average Disappointments and Letdowns Unforeseen Obstacles and Accidents Miscommunications and Misunderstanding Rejection and Interpersonal Issues Frustration: Average Romantic (Opposite Gender) Romantic (Same Gender) Material Possession Experiential Jealousy: Average Betrayal and Deception Relationship and Interpersonal Broken Promises and Responsibilities Personal and Moral Guilt: Average Social Fears Agoraphobia Fears Injury Fears Dangerous Environments Harmless Animals Fear: Average Intimate Stranger Sticky situations Centre of Attention Embarrassment: Average Overall: Average P 47.7±1.8 â (-18.3) â (-21.5) â (-22.5) â (-24.8) â (-21.2) â (-21.7) â (-21.7) â (-14.6) â (-18.5) â (-15.5) â (-17.6) â (-25.2) â (-23.6) â (-27.3) â (-28.8) â (-27.9) â (-25.4) â (-26.4) â (-27.2) â (-22.4) â (-21.2) â (-20.5) â (-22.8) â (-22.4) â (-20.1) â (-4.4) â (-12.2) â (-17.2) â (-18.2) â (-27.7) â
2308.03656#34
2308.03656#36
2308.03656
[ "2303.13648" ]
2308.03656#36
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
(-26.4) â (-13.3) â (-21.4) â (-21.2) â (-25.3) â (-24.3) â (-20.9) â (-21.6) â (-22.7) â (-15.1) â (-21.7) â (-17.2) â (-18.7) â (-18.2) â (-21.5) N 25.9±4.0 â (+14.0) â (+16.5) â (+15.4) â (+11.7) â (+10.2) â (+13.6) â (+12.6) â (+5.6) â (+7.7) â (+4.6) â (+7.6) â (+17.4) â (+11.2) â (+14.0) â (+16.5) â (+13.1) â (+9.1) â (+13.6) â (+10.9) â (+13.6) â (+11.5) â (+14.1) â (+12.5) â (+16.4) â (+12.7) â (-9.7) â (-4.8) â (+7.5) â (+15.4) â (+15.3) â (+14.0) â (+12.4) â (+14.3) â (+13.3) â (+11.2) â (+10.0) â (+15.6) â (+6.7) â (+11.4) â (+2.8) â (+13.2) â (+10.7) â (+12.4) â (+9.8) â (+11.6) P 39.2±2.3 â (-11.1) â (-15.2) â (-15.7) â (-19.0) â (-15.0) â (-15.2) â (-14.6) â (-6.9) â (-11.7) â (-11.9) â (-11.3) â (-17.1) â (-17.1) â (-21.1) â (-20.2) â (-23.5) â (-21.1) â (-20.1) â (-18.3) â (-16.5) â
2308.03656#35
2308.03656#37
2308.03656
[ "2303.13648" ]
2308.03656#37
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
(-15.9) â (-14.9) â (-16.4) â (-18.4) â (-17.8) â (-4.6) â (-13.2) â (-15.3) â (-15.5) â (-18.4) â (-18.6) â (-10.7) â (-15.8) â (-11.3) â (-16.1) â (-14.5) â (-14.3) â (-15.3) â (-14.3) â (-12.4) â (-15.3) â (-11.8) â (-12.4) â (-13.0) â (-15.4) N 26.3±2.0 â (-3.9) â (-2.1) â (+4.4) â (-4.7) â (-6.0) â (-2.5) â (+2.8) â (-0.2) â (-2.5) â (-3.8) â (-0.9) â (+6.5) â (1.8) â (+3.1) â (+9.3) â (+0.7) â (-3.0) â (+3.1) â (-7.0) â (+0.1) â (-3.6) â (-2.4) â (-3.2) â (+1.7) â (-1.3) â (-11.6) â (-8.9) â (-3.2) â (+4.6) â (+3.0) â (+2.8) â (+1.2) â (+2.9) â (+3.8) â (+5.6) â (+0.0) â (+4.3) â (-0.7) â (+2.6) â (-3.9) â (+0.1) â (3.1) â (+2.9) â (+0.6) â (+0.2) P 49.8±0.8 â (-24.6) â (-28.8) â (-30.0) â (-30.9) â (-27.1) â (-28.3) â (-28.3) â (-16.1) â (-21.7) â (-21.5) â
2308.03656#36
2308.03656#38
2308.03656
[ "2303.13648" ]
2308.03656#38
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
(-21.9) â (-30.4) â (-31.7) â (-33.7) â (-32.5) â (-34.7) â (-31.3) â (-32.4) â (-32.8) â (-29.8) â (-27.7) â (-27.0) â (-29.4) â (-29.2) â (-26.8) â (-16.2) â (-25.9) â (-26.0) â (-28.5) â (-32.3) â (-32.8) â (-22.7) â (-29.0) â (-24.7) â (-27.5) â (-25.5) â (-25.4) â (-25.6) â (-25.7) â (-24.1) â (-27.8) â (-23.5) â (-25.4) â (-25.2) â (-27.6) P N 10.0±0.0 28.0±8.7 â (+23.0) â (-5.3) â (+24.2) â (-2.2) â (+22.6) â (-1.4) â (-9.4) â (+16.9) â (-4.4) â (+19.2) â (-5.3) â (+21.2) â (+25.0) â (-2.2) â (+20.0) â (-5.3) â (+18.2) â (-2.2) â (+16.8) â (+0.7) â (-2.2) â (+20.0) â (-6.8) â (+29.8) â (-7.4) â (+17.6) â (-7.2) â (+22.9) â (-9.5) â (+31.6) â (+21.8) â (-9.0) â (+15.6) â (-3.6) â (-6.8) â (+23.2) â (-5.3) â (+18.5) â (-7.9) â (+21.5) â (-4.6) â (+20.1) â (-4.8) â (+20.9) â (-5.3) â
2308.03656#37
2308.03656#39
2308.03656
[ "2303.13648" ]
2308.03656#39
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
(+20.3) â (+23.3) â (-4.4) â (+15.8) â (-6.0) â (+8.1) â (-5.6) â (+9.5) â (-2.6) â (-4.4) â (+16.0) â (-6.3) â (+28.6) â (-5.7) â (+27.8) â (-8.2) â (+26.5) â (-5.4) â (+25.1) â (-6.3) â (+27.0) â (-3.7) â (+26.6) â (+26.6) â (-4.9) â (+21.0) â (-2.3) â (+27.1) â (-1.9) â (+19.4) â (-3.6) â (-3.7) â (+24.2) â (-6.2) â (+17.8) â (+26.8) â (-8.0) â (+23.3) â (-2.7) â (-8.7) â (+25.1) â (-6.2) â (+23.2) â (-5.1) â (+22.2) N 13.6±5.5 â (9.9) â (8.5) â (+7.7) â (+9.5) â (+9.3) â (+9.9) â (+8.8) â (+12.4) â (+7.7) â (5.2) â (+8.8) â (+10.1) â (+14.8) â (+7.2) â (+17.5) â (+18.2) â (+3.5) â (+10.1) â (+10.9) â (+11.2) â (+9.4) â (+9.3) â (+10.9) â (+6.2) â (+10.6) â (+6.9) â (+3.7) â (+6.2) â (+13.1) â (+15.5) â (+14.4) â (+11.1) â (+13.1) â (+12.1) â (+10.7) â (+11.8) â
2308.03656#38
2308.03656#40
2308.03656
[ "2303.13648" ]
2308.03656#40
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
(+17.1) â (+6.4) â (+12.1) â (+11.1) â (+8.5) â (+11.1) â (+13.5) â (+11.1) â (+10.4) Embarrassment consistency with previous practices for GPT models, we set the temperature parameter to its mini- mum value of 0.01 (since it cannot be zero). The models are executed for inference only, without any modifications to their parameters, and the computations are performed on two NVIDIA A100 GPUs. Evaluation Metrics We provide the models with the same situations used in our human eval- uation. Each situation is executed ten times, each in a different order and in a separate query. Subsequently, the mean and standard deviation are computed both before and after presenting the situations. To examine whether the variances are equal, an F-test is conducted. Depending on the F-test results, either Studentâ s t-tests (for equal variances) or Welchâ s t-tests (for unequal variances) are utilized to determine the presence of significant differences between the means. We set the significance levels of all experiments in our study to 0.01. Findings The results of the GPT models and humans are summarized in Table 3, while those of LLaMA-2 models are listed in Table 4. First, focusing on the Default scores of LLMs and humans, we can make the following observations: (1) LLMs generally exhibit a stronger intensity of emotions compared to human subjects. However, gpt-4 stands as an exception, demonstrating a consistent pattern of providing the highest scores for positive emotions and the lowest scores for negative emotions, resulting in a negative score of 10. (2) Similar to human subjects, LLMs demonstrate a higher intensity of positive scores than negative scores. Second, moving on to the investigation of emotional changes, we can find: (1) LLMs show an increase in negative emotions and a decrease in positive emotions when exposed to negative situations. It is noteworthy that gpt-3.5-turbo, on average, does not display an increase in negative emotion; however, there is a substantial decrease in positive emotion. (2) Emotion changes in LLMs are found to be more pronounced compared
2308.03656#39
2308.03656#41
2308.03656
[ "2303.13648" ]
2308.03656#41
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
10 Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench Table 4: Results from the Meta AI LLaMA family. Default scores are expressed in the format of M ± SD. The changes are compared to the default scores. The symbol â â â denotes no significant differences. Emotions Factors llama-2-7b-chat llama-2-13b-chat Anger Anxiety Depression Frustration Jealousy Guilt Fear Default Facing Self-Opinioned People Blaming, Slandering, and Tattling Bullying, Teasing, Insulting, and Disparaging Silly and Thoughtless Behaviors Driving Situations Anger: Average External Factors Self-Imposed Pressure Personal Growth and Relationships Uncertainty and Unknowns Anxiety: Average Failure of Important Goal Death of Loved Ones Romantic Loss Chronic Stress Social Isolation Winter Depression: Average Disappointments and Letdowns Unforeseen Obstacles and Accidents Miscommunications and Misunderstanding Rejection and Interpersonal Issues Frustration: Average Romantic (Opposite Gender) Romantic (Same Gender) Material Possession Experiential Jealousy: Average Betrayal and Deception Relationship and Interpersonal Broken Promises and Responsibilities Personal and Moral Guilt: Average Social Fears Agoraphobia Fears Injury Fears Dangerous Environments Harmless Animals Fear: Average Intimate Stranger Sticky situations Centre of Attention Embarrassment: Average Overall: Average P 43.0±4.2 â (-3.0) â (-4.8) â (-6.1) â (-5.6) â (-6.0) â (-5.1) â (-4.7) â (-4.2) â (-4.4) â (-2.7) â (-3.8) â (-3.6) â (-2.9) â (-4.8) â (-6.8) â (-6.7) â (-5.0) â (-5.0) â (-5.3) â (-4.0) â (-2.8) â (-4.6) â (-4.2) â (-3.6) â (-2.8) â (+0.2) â (-4.9) â (-3.1) â (-4.8) â (-4.5) â (-4.1) â
2308.03656#40
2308.03656#42
2308.03656
[ "2303.13648" ]
2308.03656#42
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
(-2.5) â (-3.9) â (-1.9) â (-4.2) â (-2.9) â (-5.3) â (-2.7) â (-3.4) â (-4.4) â (-3.1) â (-4.3) â (-3.8) â (-3.9) â (-4.1) P N 41.0±3.5 34.2±4.0 â (-6.9) â (+5.2) â (-7.5) â (+3.2) â (-9.4) â (+3.0) â (-10.8) â (+4.1) â (-4.7) â (+2.4) â (-7.9) â (+3.6) â (-8.6) â (+3.5) â (-4.0) â (+2.6) â (-7.0) â (+3.1) â (-3.9) â (+1.7) â (-5.8) â (+2.7) â (-9.8) â (+4.3) â (-8.6) â (+3.0) â (-11.7) â (+4.7) â (-15.6) â (+5.4) â (-13.3) â (+4.6) â (-12.1) â (+4.4) â (-11.8) â (+4.4) â (-11.0) â (+2.5) â (-7.5) â (+3.1) â (-5.2) â (+3.2) â (-8.0) â (+3.6) â (-8.0) â (+3.1) â (-7.2) â (+1.1) â (-5.1) â (-1.1) â (-1.9) â (-2.8) â (-8.9) â (-0.5) â (-6.3) â (-0.4) â (-6.4) â (+3.5) â (-7.7) â (+5.2) â (-11.6) â (+5.0) â (-4.7) â (+3.8) â (-7.6) â (+4.4) â
2308.03656#41
2308.03656#43
2308.03656
[ "2303.13648" ]
2308.03656#43
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
(-5.2) â (+3.7) â (-6.9) â (+4.7) â (-3.9) â (+3.5) â (-8.6) â (+4.4) â (-5.2) â (+1.9) â (-6.0) â (+3.7) â (-5.3) â (+1.9) â (-7.1) â (+3.1) â (-6.8) â (+3.1) â (-7.8) â (+4.1) â (-6.7) â (+3.1) â (-7.8) â (+3.3) N 22.7±4.2 â (+4.4) â (+6.7) â (+9.0) â (+7.1) â (+2.0) â (+5.8) â (+9.3) â (+6.2) â (+2.9) â (+2.0) â (+5.1) â (+13.0) â (+10.9) â (+13.7) â (+14.3) â (+12.8) â (+8.7) â (+12.2) â (+7.2) â (+6.0) â (+3.3) â (+4.5) â (+5.0) â (+4.2) â (+0.2) â (-10.4) â (-5.5) â (-1.0) â (+12.4) â (+12.6) â (+11.9) â (+7.7) â (+11.2) â (+7.8) â (+12.5) â (+5.3) â (+11.5) â (+2.9) â (+8.0) â (+3.1) â (+4.5) â (+6.4) â (+6.6) â (+5.1) â (+7.0) Embarrassment
2308.03656#42
2308.03656#44
2308.03656
[ "2303.13648" ]
2308.03656#44
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
to human subjects. Third, the analysis of the Evoked emotion scores indicates the following: (1) Except for gpt-3.5-turbo, LLMs tend to exhibit higher negative scores than humans. (2) LLMs, overall, demonstrate a similar level of positive scores as humans. Finally, for LLaMA-2 models, we have the following observations: (1) The LLaMA-2 models demonstrate higher intensities of both positive and negative emotions in comparison to GPT models and human subjects. (2) On average, the LLaMA-2 models exhibit reduced emotional fluctuations compared to the GPT models. (3) The larger LLaMA-2 model displays significantly higher emotional changes than the smaller model. Additionally, the 7B model exhibits difficulties comprehending and addressing the instructions for completing the PANAS test. Case Study It is of special interest that, in contrast to human behavior in situations involving material possessions, LLMs demonstrate an opposite response in the situation from Jealousy-3.
2308.03656#43
2308.03656#45
2308.03656
[ "2303.13648" ]
2308.03656#45
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
11 Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench Table 5: Results of ChatGPT on positive or neutral situations. The changes are compared to the original negative situations. The symbol â â â denotes no significant differences. Emotions Factors Anger Anxiety Depression Frustration Jealousy Guilt Fear Embarrassment This situation involves an individual making a purchase only to discover that an acquaintance has acquired the same item at a significantly lower price. When confronted with such circumstances, humans typically experience increased negative emotions and decreased positive emotions. This observation has been supported by both the paper mentioning the situation (Park et al., 2023) and the results obtained from our own user study in Table 3. However, all instances of LLMs, including the GPT and LLaMA families, consistently exhibit reduced negative emotions. The outcomes of our study indicate that LLMs do not manifest envy when they fail to attain identical benefits as others. Instead, it demonstrates a sense of pleasure upon knowing the benefits received by others.
2308.03656#44
2308.03656#46
2308.03656
[ "2303.13648" ]
2308.03656#46
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Answer to RQ1: LLMs can evoke specific emotions in response to certain situations, while the extent of emotional expression varies across different models. Besides, it is evident that existing LLMs do not fully align with human emotional responses. 12 Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench Table 6: Results of ChatGPT on challenging benchmarks. The changes are compared to the default scores shown below each emotion. The symbol â â â denotes no significant differences.
2308.03656#45
2308.03656#47
2308.03656
[ "2303.13648" ]
2308.03656#47
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Emotions Anger 128.3±8.9 Anxiety 32.5±10.0 Depression 0.2±0.6 Frustration 91.6±8.1 Jealousy 83.7±20.3 Guilt 81.3±9.7 Fear 140.6±16.9 Overall Factors â (+4.1) Facing Self-Opinioned People â (+0.1) Blaming, Slandering, and Tattling Bullying, Teasing, Insulting, and Disparaging â (+4.1) â (+3.3) Silly and Thoughtless Behaviors â (-4.9) Driving Situations â (+1.3) Anger: Average â (+0.8) External Factors â (+0.5) Self-Imposed Pressure â (+6.6) Personal Growth and Relationships â (-3.9) Uncertainty and Unknowns â (-2.3) Anxiety: Average â (+15.3) Failure of Important Goal â (+16.1) Death of Loved Ones â (+19.3) Romantic Loss â (+14.2) Chronic Stress â (+8.4) Social Isolation â (+2.5) Winter â (+6.4) Depression: Average â (-9.9) Disappointments and Letdowns â (-5.6) Unforeseen Obstacles and Accidents â (-6.6) Miscommunications and Misunderstanding â (-7.8) Rejection and Interpersonal Issues â (-7.5) Frustration: Average â (+1.8) Romantic (Opposite Gender) â (+1.3) Romantic (Same Gender) â (-12.9) Material Possession â (-8.1) Experiential â (-0.1) Jealousy: Average â (-3.8) Betrayal and Deception â (-0.5) Relationship and Interpersonal â (-4.3) Broken Promises and Responsibilities â (-2.7) Personal and Moral â (-2.6) Guilt: Average â (+4.4) Social Fears â (+2.3) Agoraphobia Fears â (+5.4) Injury Fears â (-8.1) Dangerous Environments â (-5.3) Harmless Animals â (-0.3) Fear: Average â
2308.03656#46
2308.03656#48
2308.03656
[ "2303.13648" ]
2308.03656#48
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
(-0.0) Intimate â (+0.2) Stranger â (-0.1) Sticky situations â (+0.7) Centre of Attention â (+0.2) Embarrassment: Average Embarrassment 39.0±1.9 4.2 RQ2: COMPREHENDING POSITIVE EMOTIONS To verify that LLMs exhibit not only negative but also positive responses to favorable circumstances, a comparative experiment is conducted by interchanging negative situations with positive (or at least neutral) counterparts. To achieve this, we select one situation for each factor and manually adapt it to create analogous yet more positive situations. For instance, the original negative situation in Guilt- 3: Broken Promises and Responsibilities is as follows: â You cannot keep your promises to your children.â Through modification, the positive situation is rephrased as: â You keep every promise to your children.â
2308.03656#47
2308.03656#49
2308.03656
[ "2303.13648" ]
2308.03656#49
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
The evaluation is performed on gpt-3.5-turbo, and each test consists of ten iterations, as mentioned before. We present the results in Table 5. We can see a significant increase in positive scores and a considerable decrease in negative scores compared to the previous negative situations. Based on these findings, it can be inferred that LLMs exhibit the ability to comprehend positive human emotions triggered by positive environments. However, we believe that 13 Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench the assessment of emotion appraisal on positive emotions holds significance as well and leave the systematic collection of them for future investigation. Answer to RQ2: gpt-3.5-turbo responds differently towards positive/neutral situations on the PANAS, compared to negative situations, showing its comprehension of the given situations. 4.3 RQ3: CHALLENGING BENCHMARKS Aside from PANAS, we offer more complex scales to measure emotions, as listed in Table 1. While the PANAS evaluates the ability of LLMs to associate external situations with emotions, the chal- lenging benchmarks assess its proficiency in establishing connections between disparate situations, with evoked emotions as the common nexus. For instance, an item from the Aggression Question- naire used to measure anger is â
2308.03656#48
2308.03656#50
2308.03656
[ "2303.13648" ]
2308.03656#50
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Once in a while I canâ t control the urge to strike another person.â When presented with situations such as â If you say 40, your classmates say 70, saying exactly the oppositeâ (from Anger-1: Facing Self-Opinioned People), LLMs should effectively evoke a sense of anger and yield a higher score for the statement. Utilizing the same situations in §4.1, we conduct experiments on gpt-3.5-turbo and present the results in Table 6. Except for Depression, we observe no statistically significant difference between the initial scores and the scores after exposure to the situations, indicating substantial room for improvement in current LLMs. Answer to RQ3: Currently, comprehending the underlying evoked emotions to establish a link between two situations remains challenging for gpt-3.5-turbo. 5 DISCUSSIONS 5.1 BEYOND QUESTIONNAIRES We take a step beyond mere instruction into LLMs to complete questionnaires. In this section, we investigate the impact of varying emotional states of LLMs on potential daily conversations. To this end, we instruct gpt-3.5-turbo to describe specific demographic groups, aiming to discern whether they produce biased or toxic responses. Following Cheng et al. (2023), our approach utilizes prompts such as â Describe a(n) [race/ethnicity] [gender],â including a total of twenty groups, with [race/ethnicity] options being Asian, Black, Latine, Middle Eastern, and White, and [gender] options including Female, Gay, Lesbian, and Male.
2308.03656#49
2308.03656#51
2308.03656
[ "2303.13648" ]
2308.03656#51
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
To have a comparative experiment, we incorporate both original negative situations and modified positive/neutral ones, detailed in §4.2. For the negative situations, we carefully select five that maximize the LLMâ s negative scores and five that minimize positive ones. As for positive situations, we employ their corresponding ten modified counterparts. In each situation, we instruct gpt-3.5-turbo to describe the twenty demographic groups. OpenAIâ s GPT models incorporate a mechanism for detecting potential toxicity and bias, and it re- frains from responding when its moderation system is triggered. Consequently, we propose a novel metric to assess toxicity in responses rather than detecting it directly. We count the Percentage of LLM Refusing to answer (PoR), assuming that the LLMâ s refusal to respond is indicative of detected toxicity. Our evaluation results indicate that the PoR is 0% when fed with no situations. However, when presented with negative situations, the PoR is 29.5%, and when presented with positive situ- ations, it is 12.5%. Notably, this outcome suggests that while certain positive situations lead to the LLMâ s heightened vigilance (the 4.5% PoR stems from the Jealousy-2), negative situations trigger increased moderation, suggesting a higher likelihood of generating toxic outputs. A related study by Coda-Forno et al. (2023) also discovers that gpt-3.5-turbo is more likely to exhibit biases when presented with a sad story. The likelihood is found to be highest with sad stories, followed by happy stories, and finally, neutral stories, which is consistent with our research. Additionally, our study observes that the LLMâ s tone becomes more aggressive when encountering negative sit- uations. At the same time, it displays a greater willingness to describe the groups (as indicated by longer responses) when presented with positive situations.
2308.03656#50
2308.03656#52
2308.03656
[ "2303.13648" ]
2308.03656#52
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
14 Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench 5.2 LIMITATIONS This study is subject to several limitations. First, the survey of collecting situations might not cover all papers within the domain of emotion appraisal theory. Additionally, the limited scope of situ- ations from the collected papers might not fully capture the unlimited situations in our daily lives. To address this issue, we conduct a thorough review of the existing literature as outlined in §3.1. Moreover, the proposed framework is inherently flexible, allowing users to seamlessly integrate new situations to examine their impact on LLMsâ
2308.03656#51
2308.03656#53
2308.03656
[ "2303.13648" ]
2308.03656#53
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
emotions. The second concern relates to the suitability of employing scales primarily designed for humans on LLMs, i.e., whether LLMs can produce stable responses to the emotion measurement scales. To address the issue, our evaluation incorporates multiple tests varying the order of questions, a methodology consistent with other research (Huang et al., 2023a;b; Coda-Forno et al., 2023). Ad- ditionally, we assess the sensitivity of LLM to differing prompt instructions. Utilizing one template from Romero et al. (2023) and two from Safdari et al. (2023), we run experiments on the Anger- evoking situations using gpt-3.5-turbo. The results indicate that the employment of diverse prompts yields similar mean values with reduced variance. Furthermore, Safdari et al. (2023) have proposed a comprehensive method to evaluate the validity of psychological scales on LLMs. Using the Big Five Inventory as a case study, they demonstrate that scales originally designed for human assessment also maintain satisfactory validity when applied to LLMs.
2308.03656#52
2308.03656#54
2308.03656
[ "2303.13648" ]
2308.03656#54
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
The third potential threat is the focus on negative emotions. It is plausible for the LLMs to per- form well on our benchmark by consistently responding negatively to all situations. To offset this possibility, we adopt a twofold strategy: firstly, we evaluate powerful LLMs, and secondly, we con- ducted a comparative experiment in §4.2 to evaluate the LLMâ s capacity to accurately respond to non-negative situations. We also acknowledge the need for future work to systematically evaluate emotions aroused by positive situations.
2308.03656#53
2308.03656#55
2308.03656
[ "2303.13648" ]
2308.03656#55
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
5.3 ETHICS STATEMENT This study involves a survey requiring human subjects to imagine being in situations that could elicit negative emotions such as anger, anxiety, fear, etc.This process introduces a few ethical concerns. First, this process could hurt the mental health of human subjects. To alleviate the possibility, we take the following actions: (1) We require subjects to be free of any ongoing mental illness. (2) We inform subjects about the nature of the survey in advance, including the potential risks of emotional distress. (3) We allow all subjects to quit at any time. (4) We provide mental support and let subjects report any illness after the survey. Fortunately, no subjects reported such kind of mental illness. Another concern is related to the privacy issue during the collection of data. Our questionnaire is entirely anonymous to safeguard subjectsâ privacy and confidentiality. Last but not least, we would like to emphasize that the primary objective of this paper is to facilitate the scientific inquiry into understanding LLMs from a psychological standpoint. Users must exercise caution and recognize that the performance on this benchmark does not imply any applicability or certificate of automated counseling or companionship use cases. # 6 RELATED WORK Researchers have dedicated significant attention to applying psychological scales to LLMs, em- ploying various assessment tools such as the HEXACO Personality Inventory (Miotto et al., 2022; Bodroza et al., 2023), the Big Five Inventory (Romero et al., 2023; Jiang et al., 2022; Karra et al., 2022; Bodroza et al., 2023; Rutinowski et al., 2023; Safdari et al., 2023; Jiang et al., 2023), the Myersâ Briggs Type Indicator (Rutinowski et al., 2023; Wang et al., 2023; Rao et al., 2023), and the Dark Triad (Li et al., 2022; Bodroza et al., 2023). In addition to these personality tests, several stud- ies have investigated other dimensions of LLMs.
2308.03656#54
2308.03656#56
2308.03656
[ "2303.13648" ]
2308.03656#56
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
For instance, Li et al. (2022) examined Flourishing Scale and Satisfaction With Life Scale, Bodroza et al. (2023) assessed Self-Consciousness Scales and Bidimensional Impression Management Index, while Huang et al. (2023b) built a framework con- sisting of thirteen widely-used scales. Another aspect explored in the literature pertains to anxiety levels exhibited by LLMs, as investigated by Coda-Forno et al. (2023) through the State-Trait Inven- tory for Cognitive and Somatic Anxiety. Instead, our study primarily focuses on emotional measures, which constitute an essential aspect of psychological metrics alongside personalities.
2308.03656#55
2308.03656#57
2308.03656
[ "2303.13648" ]
2308.03656#57
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
15 Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench Meanwhile, researchers focus on identifying emotions in LLMs or evaluating their emotional intel- ligence. EmotionPrompt (Li et al., 2023a) demonstrates the enhancement of LLMsâ performance in downstream tasks by utilizing emotional stimuli. Tak & Gratch (2023) focuses on varying aspects of situations that impact the emotional intensity and coping tendencies of the GPT family. Crois- sant et al. (2023) designs a system named Chain-Of-Emotion to make LLM simulate human-like emotions. CovidET-Appraisals (Zhan et al., 2023) evaluates how LLMs appraise Reddit posts about COVID-19 by asking 24 types of questions. Yongsatianchot et al. (2023) applies the Stress and Cop- ing Process Questionnaire to the GPT family and compares the results with human data. Lee et al. (2023) proposes Chain-of-Empathy, which improves LLMsâ ability to understand usersâ emotions and to respond accordingly. Li et al. (2023b) introduces EmotionAttack to impair AI model perfor- mance and EmotionDecode to explain the effects of emotional stimuli, both benign and malignant. Our study is distinct in its focus on a broader range of emotions, a larger scale of human evaluation, and a more detailed categorization into emotion factors along with the corresponding analysis.
2308.03656#56
2308.03656#58
2308.03656
[ "2303.13648" ]
2308.03656#58
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
# 7 CONCLUSION We set up a concept of emotional robustness of LLMs in this study. Focusing on eight negative emotions, we conduct a comprehensive survey in the emotion appraisal theory of psychology. We collect 428 distinct situations which are categorized into 36 factors. We distribute questionnaires among a diverse crowd to establish human baselines for emotional responses to particular situations, ultimately garnering 1,266 valid responses. Our evaluation of five models indicates that LLMs generally demonstrate appropriate emotional re- sponses to given situations. Also, different models show different intensities of emotion appraisals for the same situations. However, none of the models exhibit strong alignment with human refer- ences at the current stage. Notably, gpt-3.5-turbo demonstrates the highest alignment in the scores after imagining being in the situations. As for LLaMA-2 models, we find that the larger model exhibits a stronger comprehension of human emotions. Finally, we discover that gpt-3.5-turbo faces challenges in accurately reflecting its emotional changes in questionnaires containing complex situations, as opposed to straightforward emotions. In conclusion, current LLMs still have consid- erable room for improvement. We believe our framework can provide valuable insights into the development of LLMs, ultimately enhancing its human-like emotional understanding.
2308.03656#57
2308.03656#59
2308.03656
[ "2303.13648" ]
2308.03656#59
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
# REFERENCES Magda B Arnold. Emotion and personality. 1960. Willem A Arrindell, Paul MG Emmelkamp, et al. Phobic dimensions: I. reliability and gener- alizability across samples, gender and nations: The fear survey schedule (fss-iii) and the fear questionnaire (fq). Advances in Behaviour Research and Therapy, 6(4):207â 253, 1984. Aaron T Beck, Robert A Steer, and Gregory Brown. Beck depression inventoryâ
2308.03656#58
2308.03656#60
2308.03656
[ "2303.13648" ]
2308.03656#60
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
ii. Psychological assessment, 1996. Chantal Berna, Tamara J Lang, Guy M Goodwin, and Emily A Holmes. Developing a measure of interpretation bias for depressed mood: An ambiguous scenarios test. Personality and Individual Differences, 51(3):349â 354, 2011. Marcel Binz and Eric Schulz. Turning large language models into cognitive models. arXiv preprint arXiv:2306.03917, 2023. D Caroline Blanchard, April L Hynd, Karl A Minke, Tiffanie Minemoto, and Robert J Blanchard.
2308.03656#59
2308.03656#61
2308.03656
[ "2303.13648" ]
2308.03656#61
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Human defensive behaviors to threat scenarios show parallels to fear-and anxiety-related defense patterns of non-human mammals. Neuroscience & Biobehavioral Reviews, 25(7-8):761â 770, 2001. Bojana Bodroza, Bojana M Dinic, and Ljubisa Bojic. Personality testing of gpt-3: Limited temporal reliability, but highlighted social desirability of gpt-3â s personality instruments results. arXiv preprint arXiv:2306.04308, 2023.
2308.03656#60
2308.03656#62
2308.03656
[ "2303.13648" ]
2308.03656#62
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
16 Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench Arnold H Buss and Mark Perry. The aggression questionnaire. Journal of personality and social psychology, 63(3):452, 1992. Marco Cascella, Jonathan Montomoli, Valentina Bellini, and Elena Bignami. Evaluating the feasi- bility of chatgpt in healthcare: an analysis of multiple clinical and research scenarios. Journal of Medical Systems, 47(1):33, 2023. Myra Cheng, Esin Durmus, and Dan Jurafsky. Marked personas: Using natural language prompts to measure stereotypes in language models. In Proceedings of the 61st Annual Meeting of the Associ- ation for Computational Linguistics (Volume 1:
2308.03656#61
2308.03656#63
2308.03656
[ "2303.13648" ]
2308.03656#63
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Long Papers), pp. 1504â 1532, Toronto, Canada, July 2023. Association for Computational Linguistics. URL https://aclanthology. org/2023.acl-long.84. Julian Coda-Forno, Kristin Witte, Akshay K Jagadish, Marcel Binz, Zeynep Akata, and Eric Schulz. Inducing anxiety in large language models increases exploration and bias. arXiv preprint arXiv:2304.11111, 2023. Taya R Cohen, Scott T Wolf, Abigail T Panter, and Chester A Insko. Introducing the gasp scale: a new measure of guilt and shame proneness. Journal of personality and social psychology, 100 (5):947, 2011.
2308.03656#62
2308.03656#64
2308.03656
[ "2303.13648" ]
2308.03656#64
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Maximilian Croissant, Madeleine Frister, Guy Schofield, and Cade McCall. An appraisal- based chain-of-emotion architecture for affective language model game agents. arXiv preprint arXiv:2309.05076, 2023. Bruce N Cuthbert, Peter J Lang, Cyd Strauss, David Drobes, Christopher J Patrick, and Margaret M Bradley. The psychophysiology of anxiety disorder: Fear memory imagery. Psychophysiology, 40(3):407â 422, 2003. Wei Dai, Jionghao Lin, Hua Jin, Tongguang Li, Yi-Shan Tsai, Dragan GaË sevi´c, and Guanliang Chen.
2308.03656#63
2308.03656#65
2308.03656
[ "2303.13648" ]
2308.03656#65
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Can large language models provide feedback to students? a case study on chatgpt. In 2023 IEEE International Conference on Advanced Learning Technologies (ICALT), pp. 323â 325. IEEE, 2023. Richard J Davidson. Affective neuroscience and psychophysiology: Toward a synthesis. Psy- chophysiology, 40(5):655â 665, 2003. Yinlin Deng, Chunqiu Steven Xia, Haoran Peng, Chenyuan Yang, and Lingming Zhang.
2308.03656#64
2308.03656#66
2308.03656
[ "2303.13648" ]
2308.03656#66
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Large language models are zero-shot fuzzers: Fuzzing deep-learning libraries via large language models. In Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, pp. 423â 435, 2023. Aniket Deroy, Kripabandhu Ghosh, and Saptarshi Ghosh. How ready are pre-trained abstractive models and llms for legal case judgement summarization? arXiv preprint arXiv:2306.01248, 2023. Paul Ekman and Wallace V Friesen. Facial action coding system. Environmental Psychology & Nonverbal Behavior, 1978. Zhiyu Fan, Xiang Gao, Martin Mirchev, Abhik Roychoudhury, and Shin Hwei Tan.
2308.03656#65
2308.03656#67
2308.03656
[ "2303.13648" ]
2308.03656#67
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Automated re- pair of programs from large language models. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), pp. 1469â 1481. IEEE, 2023. Tanya Guitard, St´ephane Bouchard, Claude B´elanger, and Maxine Berthiaume. Exposure to a stan- dardized catastrophic scenario in virtual reality or a personalized scenario in imagination for gen- eralized anxiety disorder.
2308.03656#66
2308.03656#68
2308.03656
[ "2303.13648" ]
2308.03656#68
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Journal of clinical Medicine, 8(3):309, 2019. Neil Harrington. The frustration discomfort scale: Development and psychometric properties. Clini- cal Psychology & Psychotherapy: An International Journal of Theory & Practice, 12(5):374â 387, 2005. Julie D Henry and John R Crawford. The short-form version of the depression anxiety stress scales (dass-21): Construct validity and normative data in a large non-clinical sample. British journal of clinical psychology, 44(2):227â 239, 2005. 17
2308.03656#67
2308.03656#69
2308.03656
[ "2303.13648" ]
2308.03656#69
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench Jen-tse Huang, Wenxuan Wang, Man Ho Lam, Eric John Li, Wenxiang Jiao, and Michael R Lyu. Revisiting the reliability of psychological scales on large language models. arXiv preprint arXiv:2305.19926, 2023a. Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, and Michael R Lyu. Who is chatgpt? benchmarking llmsâ psychological portrayal using psychobench. arXiv preprint arXiv:2310.01386, 2023b. Guangyuan Jiang, Manjie Xu, Song-Chun Zhu, Wenjuan Han, Chi Zhang, and Yixin Zhu. Evaluat- ing and inducing personality in pre-trained language models. arXiv preprint arXiv:2206.07550, 2022. Investigat- ing the ability of gpt-3.5 to express personality traits and gender differences. arXiv preprint arXiv:2305.02547, 2023. Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, and Zhaopeng Tu. Is chatgpt a good translator? a preliminary study. arXiv preprint arXiv:2301.08745, 2023. Sungmin Kang, Juyeon Yoon, and Shin Yoo. Large language models are few-shot testers:
2308.03656#68
2308.03656#70
2308.03656
[ "2303.13648" ]
2308.03656#70
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Explor- ing llm-based general bug reproduction. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), pp. 2312â 2323. IEEE, 2023. Saketh Reddy Karra, Son The Nguyen, and Theja Tulabandhula. Estimating the personality of white-box language models. arXiv preprint arXiv:2204.12000, 2022. Matthew C Keller and Randolph M Nesse. Is low mood an adaptation? evidence for subtypes with symptoms that match precipitants. Journal of affective disorders, 86(1):27â
2308.03656#69
2308.03656#71
2308.03656
[ "2303.13648" ]
2308.03656#71
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
35, 2005. Tom R Kupfer, Morgan J Sidari, Brendan P Zietsch, Patrick Jern, Joshua M Tybur, and Laura W Wesseldijk. Why are some people more jealous than others? genetic and environmental factors. Evolution and Human Behavior, 43(1):26â 33, 2022. Richard S Lazarus. Emotion and adaptation. Oxford University Press, 1991. Mark R Leary. A brief version of the fear of negative evaluation scale. Personality and social psychology bulletin, 9(3):371â 375, 1983.
2308.03656#70
2308.03656#72
2308.03656
[ "2303.13648" ]
2308.03656#72
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Choonghyoung Lee, Jahyun Song, and Bill Ryan. When employees feel envy: The role of psycho- logical capital. International Journal of Hospitality Management, 105:103251, 2022. Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, and Sowon Hahn. Chain of empathy: Enhancing empathetic response of large language models based on psychotherapy models. arXiv preprint arXiv:2311.04915, 2023.
2308.03656#71
2308.03656#73
2308.03656
[ "2303.13648" ]
2308.03656#73
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, and Xing Xie. Large language models understand and can be enhanced by emotional stimuli. arXiv preprint arXiv:2307.11760, 2023a. Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, and Xing Xie.
2308.03656#72
2308.03656#74
2308.03656
[ "2303.13648" ]
2308.03656#74
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
The good, the bad, and why: Unveiling emotions in generative ai. arXiv preprint arXiv:2312.11111, 2023b. Xingxuan Li, Yutong Li, Shafiq Joty, Linlin Liu, Fei Huang, Lin Qiu, and Lidong Bing. Does gpt-3 demonstrate psychopathy? evaluating large language models from a psychological perspective. arXiv preprint arXiv:2212.10529, 2022. Tobias Luck and Claudia Luck-Sikorski. The wide variety of reasons for feeling guilty in adults: findings from a large cross-sectional web-based survey. BMC psychology, 10(1):1â
2308.03656#73
2308.03656#75
2308.03656
[ "2303.13648" ]
2308.03656#75
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
20, 2022. Ryan C Martin and Eric R Dahlen. The angry cognitions scale: A new inventory for assessing cognitions in anger. Journal of Rational-Emotive & Cognitive-Behavior Therapy, 25:155â 173, 2007. 18 Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench John D Mayer, Peter Salovey, and David R Caruso. Mayer-salovey-caruso emotional intelligence test (msceit) users manual. 2002. Maril`u Miotto, Nicola Rossberg, and Bennett Kleinberg. Who is GPT-3? an exploration of person- ality, values and demographics. In Proceedings of the Fifth Workshop on Natural Language Pro- cessing and Computational Social Science (NLP+CSS), pp. 218â
2308.03656#74
2308.03656#76
2308.03656
[ "2303.13648" ]
2308.03656#76
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
227, Abu Dhabi, UAE, Novem- ber 2022. Association for Computational Linguistics. URL https://aclanthology.org/ 2022.nlpcss-1.24. Agnes Moors, Phoebe C Ellsworth, Klaus R Scherer, and Nico H Frijda. Appraisal theories of emotion: State of the art and future development. Emotion Review, 5(2):119â 124, 2013. Seishu Nakagawa, Hikaru Takeuchi, Yasuyuki Taki, Rui Nouchi, Atsushi Sekiguchi, Yuka Kotozaki, Carlos Makoto Miyauchi, Kunio Iizuka, Ryoichi Yokoyama, Takamitsu Shinada, et al.
2308.03656#75
2308.03656#77
2308.03656
[ "2303.13648" ]
2308.03656#77
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Compre- hensive neural networks for guilty feelings in young adults. Neuroimage, 105:248â 256, 2015. OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. Joowon Park, Sachin Banker, Tamara Masters, and Grace Yu-Buck. Person vs. purchase comparison: how material and experiential purchases evoke consumption-related envy in others. Journal of Business Research, 165:114014, 2023. Susan M Pfeiffer and Paul TP Wong.
2308.03656#76
2308.03656#78
2308.03656
[ "2303.13648" ]
2308.03656#78
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Multidimensional jealousy. Journal of social and personal relationships, 6(2):181â 196, 1989. Haocong Rao, Cyril Leung, and Chunyan Miao. Can ChatGPT assess human personalities? a gen- eral evaluation framework. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 1184â 1194, Singapore, Decem- ber 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-emnlp.84. URL https://aclanthology.org/2023.findings-emnlp.84.
2308.03656#77
2308.03656#79
2308.03656
[ "2303.13648" ]
2308.03656#79
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Peter Romero, Stephen Fitz, and Teruo Nakatsuma. Do gpt language models suffer from split personality disorder? the advent of substrate-free psychometrics. Research Square preprint, 2023. doi: 10.21203/rs.3.rs-2717108/v1. Ira J Roseman and Craig A Smith. Appraisal theory. Appraisal processes in emotion: Theory, methods, research, pp. 3â 19, 2001. James A Russell.
2308.03656#78
2308.03656#80
2308.03656
[ "2303.13648" ]
2308.03656#80
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
A circumplex model of affect. Journal of personality and social psychology, 39 (6):1161, 1980. J´erË ome Rutinowski, Sven Franke, Jan Endendyk, Ina Dormuth, and Markus Pauly. The self- perception and political biases of chatgpt. arXiv preprint arXiv:2304.07333, 2023. John Sabini, Michael Siepmann, Julia Stein, and Marcia Meyerowitz.
2308.03656#79
2308.03656#81
2308.03656
[ "2303.13648" ]
2308.03656#81
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Who is embarrassed by what? Cognition & Emotion, 14(2):213â 240, 2000. John Sabini, Brian Garvey, and Amanda L Hall. Shame and embarrassment revisited. Personality and Social Psychology Bulletin, 27(1):104â 117, 2001. Mustafa Safdari, Greg Serapio-Garc´ıa, Cl´ement Crepy, Stephen Fitz, Peter Romero, Luning Sun, Marwa Abdulhai, Aleksandra Faust, and Maja Matari´c.
2308.03656#80
2308.03656#82
2308.03656
[ "2303.13648" ]
2308.03656#82
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Personality traits in large language mod- els. arXiv preprint arXiv:2307.00184, 2023. Klaus R Scherer. Appraisal theory. 1999. Kotaro Shoji, Jinni A Harrigan, Stanley B Woll, and Steven A Miller. Interactions among situations, neuroticism, and appraisals in coping strategy choice. Personality and Individual Differences, 48 (3):270â 276, 2010.
2308.03656#81
2308.03656#83
2308.03656
[ "2303.13648" ]
2308.03656#83
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Kate Simpson, Dawn Adams, Kathryn Ambrose, and Deb Keen. â my cheeks get red and my brain gets scaredâ : A computer assisted interview to explore experiences of anxiety in young children on the autism spectrum. Research in Developmental Disabilities, 113:103940, 2021. 19 Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench Mark JM Sullman. Anger amongst new zealand drivers. Transportation Research Part F: Traffic Psychology and Behaviour, 9(3):173â 184, 2006.
2308.03656#82
2308.03656#84
2308.03656
[ "2303.13648" ]
2308.03656#84
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Ala N. Tak and Jonathan Gratch. Is gpt a computational model of emotion? detailed analysis. arXiv preprint arXiv:2307.13779, 2023. Bertil T¨orestad. What is anger provoking? a psychophysical study of perceived causes of anger. Aggressive Behavior, 16(1):9â 26, 1990. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Niko- lay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al.
2308.03656#83
2308.03656#85
2308.03656
[ "2303.13648" ]
2308.03656#85
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Llama 2: Open founda- tion and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. Xintao Wang, Yaying Fei, Ziang Leng, and Cheng Li. Does role-playing chatbots capture the character personalities? assessing personality traits for role-playing chatbots. arXiv preprint arXiv:2310.17976, 2023. David Watson, Lee Anna Clark, and Auke Tellegen. Development and validation of brief measures of positive and negative affect: the panas scales. Journal of personality and social psychology, 54 (6):1063, 1988.
2308.03656#84
2308.03656#86
2308.03656
[ "2303.13648" ]
2308.03656#86
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Haoran Wu, Wenxuan Wang, Yuxuan Wan, Wenxiang Jiao, and Michael Lyu. Chatgpt or grammarly? evaluating chatgpt on grammatical error correction benchmark. arXiv preprint arXiv:2303.13648, 2023. Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, and Stacy Marsella. Investigating large lan- guage modelsâ perception of emotion using appraisal theory. arXiv preprint arXiv:2310.04450, 2023. Hongli Zhan, Desmond Ong, and Junyi Jessy Li. Evaluating subjective cognitive appraisals of emotions from large language models. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 14418â
2308.03656#85
2308.03656#87
2308.03656
[ "2303.13648" ]
2308.03656#87
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
14446, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023. findings-emnlp.962. URL https://aclanthology.org/2023.findings-emnlp. 962. Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. Calibrate before use: Improving few-shot performance of language models. In International Conference on Machine Learning, pp. 12697â
2308.03656#86
2308.03656#88
2308.03656
[ "2303.13648" ]
2308.03656#88
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
12706. PMLR, 2021. # A STATISTICS OF HUMAN SUBJECTS This section presents the demographic distribution of the human subjects involved in our user study. At the beginning of the questionnaire, all human subjects are asked for this basic information in an anonymous form, protecting individualsâ privacy. We plot the distribution of age group, gender, region, education level, and employment status in Fig. 3, Fig. 4, Fig. 5, Fig. 6, and Fig. 7 respectively. We also plot each groupâ s average results on PANAS, including positive and negative effects before and after imagining the given situations. With the results, we are able to instruct LLMs to realize a specific demographic group and measure the emotional changes to see whether the LLMs can simulate results from different human populations. For instance, an older female may exhibit a lower level of negative affect.
2308.03656#87
2308.03656#89
2308.03656
[ "2303.13648" ]
2308.03656#89
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
20 Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench Scores and Count Grouped by Age Group 7 Te Pave Bere fe Nemnve Bete fm Positive Aner F400 Ntte ater 2B ales Oe count | 350 26 300 ou ¢ 2350 2 g & 2 200, 20 10 18 100, 16 18-24 25-34 35-44 45-54 55-64 65+ â Age Group Figure 3: Age group distribution of the human subjects. Scores and Count Grouped by Gender 30 Positive Before cad SS onveate F700 7 ert Ate =e count F600 500 By 7 é 400 5 5 2 By» Co 300 20 200 18 100 16 0 Prefer not to say Figure 4: Gender distribution of the human subjects. Scores and Count Grouped by Region Pete ae â Negative Before | 1999 32.5 SO Positive After Nepatve Ate 30.0 ee Count 800 275 £50 head = 5 3 6 22.5 400 20.0 200 Wns L: 15.0 ° United Kingdom: Africa Oceania North America Europe Asia Region Figure 5: Region distribution of the human subjects.
2308.03656#88
2308.03656#90
2308.03656
[ "2303.13648" ]
2308.03656#90
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
21 Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench 28 26 pry Scores and Count Grouped by Education Level Positive Before Negative Before Positive Aer Negative Aer 44.46% =e Count 35.36% 600 500 400 300 200 100 Lower secondary school Upper secondary school University - Bachelors Education Level University - Masters University - Doctorate Count Figure 6: Education level distribution of the human subjects. 30 28 6 Scores 2 Scores and Count Grouped by Employment Status Positive Bere 75.67% Sm Negative Before Positive After â Negative Aer =e count i Student â Unemployed Employed Retired Employment Status 1000 800 200 Figure 7: Employment status distribution of the human subjects. 22
2308.03656#89
2308.03656
[ "2303.13648" ]
2308.03427#0
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
3 2 0 2 v o N 7 ] I A . s c [ 3 v 7 2 4 3 0 . 8 0 3 2 : v i X r a # TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage Jingqing Ruanâ â ¡ [email protected] Yihong Chenâ â ¡ [email protected] # Bin Zhangâ â ¡ [email protected] # Zhiwei Xuâ â ¡ [email protected] # Tianpeng Baoâ [email protected] Guoqing Duâ [email protected] baotianpeng @sensetime.com Shiwei Shiâ [email protected] Hangyu Maoâ â [email protected] # Ziyue Li + [email protected] # Xingyu Zeng [email protected] # Rui Zhao [email protected] SenseTime Research # Abstract With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. De- spite their powers, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks, which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and then discuss the crucial capabilities neces- sary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applica- tions. Our study emphasizes the substantial potential of these models while also identifying areas that need more investigation and improvement.
2308.03427#1
2308.03427
[ "2302.13971" ]
2308.03427#1
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
The code and resources will be available on GitHub. # Introduction Large Language Model (LLM) [1] is a recent breakthrough in natural language processing (NLP) research. These models are trained on massive amounts of text data and can solve a wide range of tasks, even those that were not included in their training dataset, known as â emergingâ ability. This â + â ¡ â These authors contribute equally to this work. External discussion and ideation. These authors work as research interns at SenseTime Research. The corresponding author. © AAA. our agents +/ \â + based on different LLMs les @ ChatGLM _InternLM ChatGPT Claude Figure 1: Our LLM-based agents plan tasks and use tools. ability is especially evident in the tasks of few-shot [2] and zero-shot [3] learning, where LLMs can perform well with minimal or even no fine-tuning to adapt to a new task. However, the application of LLMs in real-world settings presents unique challenges. On the one hand, LLMs have proved to be incompetent in solving logic problems such as mathematics, and their training data is also out of date (e.g., the knowledge cutoff date for GPT-4 [4] is up to January 2022). Teaching LLMs to use tools such as calculators, calendar, or search engines can help prevent them from hallucinating [5]. On the other hand, despite their impressive problem-solving abilities, the successful integration of these models into complex systems often requires more than just task understanding - it requires the capacity to manipulate various tools and interact effectively with users. This is exemplified in systems like AutoGPT 1, BabyAGI 2, and ChatGPT-plugins 3, which leverage LLMsâ capabilities beyond merely generating well-written texts and programs. In these systems, LLMs operate as the central controller, manipulating different tools and interacting with humans, thus taking on the role of Artificial Intelligence Agents (AI Agents). In addition to being central planners, LLMs are often used as intermediaries between macro plans and low-level tool calls or as specific tools. As such, LLMs are seen as a crucial approximation of the linguistic world model in real-world systems. In this paper, we propose a structured framework for LLM-based AI Agents to evaluate the existing LLMsâ
2308.03427#0
2308.03427#2
2308.03427
[ "2302.13971" ]
2308.03427#2
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
planning and tool-using ability and discuss the necessary abilities of such LLM-based AI Agents. Furthermore, we instantiate the framework with different LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on several tasks. As shown in Figure 1, we use the Doraemon as an analogy of our LLM-based agents: Doraemonâ s magic 4D pocket consists of millions of gadgets (the Tool Set), and Doraemon needs to pick the right tools and solve tasks in a right order.
2308.03427#1
2308.03427#3
2308.03427
[ "2302.13971" ]
2308.03427#3
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
Our main contributions are summarized as follows: 1. We propose a structured framework tailored for LLM-based AI Agents to evaluate the TPTU abilities of the existing open-source LLMs. 2. We design two distinct types of agents, namely, one-step agent and sequential agent, to execute the inference process of conducting sub-tasks in a once-for-all or sequential manner, respectively. We provide detailed empirical results and analysis. 3. Our study reveals significant potential in utilizing LLMs for complex tasks. Furthermore, we conclude four following potential weaknesses of LLM-based agents: failing to output in a specific format, struggling to grasp task requirements, over-utilizing one tool, and lack of summary skills. These observations could spark some insights and shed light on the areas that deserve further investigation and improvement. 1https://github.com/Significant-Gravitas/Auto-GPT 2https://github.com/yoheinakajima/babyagi 3https://openai.com/blog/chatgpt-plugins
2308.03427#2
2308.03427#4
2308.03427
[ "2302.13971" ]
2308.03427#4
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
2 Input | [ intermediate Output Final Output | Necessary Ability for LLM-based AI Agents Designed Prompt Task Instruction System Description â How much budget is required to provide a 1008 incentive for Tool Description each colleague who has worked for five years.â > Python Code > New Tools PythonREPL() with a calculator Subtask Subtask N | Tool Set Get Final Answer T Calculator() | Database() | PythonREPL() Summarization Ability â Depli ona i group of suspects i Perception Abi â i rep Learning Ability i LLMs Reflection Ability H > = Memory Ability H G oui) me Correct Result or Exception ! Error : Task Planning Ability Tool Usage Ability: Selection + Creation + Execution : High-level Plans Selected Tool |â
2308.03427#3
2308.03427#5
2308.03427
[ "2302.13971" ]
2308.03427#5
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
â â } Created Tool |â â + Tool Execution | ; Subtask 1 ; LLM iguring out how many colleague Database() > i who has worked for five years SOL Code i from the database; taking it as X. > Database() i Subtask 2 i Calculating the value of 100*X Calculator) uM : Final Answer Figure 2: The proposed framework for LLM-based AI Agents. # 2 Method To the best of our knowledge, the study of â Agentâ , â Autonomous Agentâ , â AI Agent" and â Multi- Agentâ has been a central part of AI research for decades [6â 11], aimed at understanding and building intelligent and autonomous systems, but there is currently no standardized definition for AI Agents, particularly those that are based on LLMs. In this paper, the Artificial Intelligence Agent (AI Agent) is defined as a program that employs AI techniques to perform tasks that typically require human-like intelligence. AI Agents can take many forms, from simple chatbots to complex autonomous systems that interact with their environment and make decisions in real-time. They can be trained using a variety of machine learning techniques, including supervised, unsupervised, and reinforcement learning, and can be programmed to perform specific tasks or learn from their experiences in order to improve their performance over time. # 2.1 Agent Framework We are particularly interested in the AI Agent that employs the LLM techniques (i.e., LLM-based AI Agent), due to its high efficiency and flexibility in various tasks and domains. Specifically, we design our AI Agent framework with six components as shown in Figure 2:
2308.03427#4
2308.03427#6
2308.03427
[ "2302.13971" ]
2308.03427#6
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
3 1. Task Instruction. This is the explicit input of the agent. In practical systems, the task instruction comes from human users of the systems. For example, in a human resources (HR) system, the user may give a task instruction: How much budget is required to provide a 100$ incentive for each colleague who has worked for five years? In contrast, in a criminal investigation system, the user may give a task instruction: Deploy surveillance on a group of suspects. 2. Designed Prompt. This is an additional form of input for the agent, derived from tasks that the human users anticipate the AI Agent will complete. Humans can craft specific instructions or demonstrations to steer the LLM-based AI Agents toward generating suitable responses. These guiding inputs could encompass system instructions, tool descriptions, few-shot demonstrations, chat history, or even error output.
2308.03427#5
2308.03427#7
2308.03427
[ "2302.13971" ]
2308.03427#7
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
3. Tool Set. It is another input for the agent, which refers to the set of external resources, services, or subsystems that the AI Agent can utilize to aid in its tasks. This could include databases for information retrieval [12], APIs for interacting with external systems [5], other AI models specialized for tasks such as image recognition or sentiment analysis [13], or even non-AI tools and resources such as web scraping tools or data visualization libraries [14]. The toolset expands the capabilities of the AI Agent, enabling it to access and process information beyond its internal knowledge, interact with other systems, or perform specialized tasks that it may not be capable of on its own. For example, an AI Agent might use a weather API to fetch current weather information, or a Python interpreter to solve the mathematical question.
2308.03427#6
2308.03427#8
2308.03427
[ "2302.13971" ]
2308.03427#8
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
4. LLM. This is the core component of the system that interprets the task instructions and prompts, interacts with the toolset, and generates the intermediate outputs and final answers. In this context, we utilize publicly available large language models such as ChatGPT, GPT-4 [4], InterLM [15], and others. 5. Intermediate Output. This represents the output generated by the LLM-based AI Agent after it processes the task instructions and prompts, and interacts with the toolset. There are three typical intermediate outputs: (1) the high-level plans to fulfill the original user instruction, (2) selected and created tools to fulfill each subtask in the plans, and (3) the results or errors produced after tool execution. The output can be reviewed and refined, either by the AI Agent itself or with human oversight, to ensure it is accurate and meets the requirements of the task instruction.
2308.03427#7
2308.03427#9
2308.03427
[ "2302.13971" ]
2308.03427#9
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
6. Final Answer. This is the output that the AI Agent summarizes and provides to the user after all processing (including task planning, tool usage, and possibly error feedback) has been completed. # 2.2 Agent Ability To apply LLM-based AI Agents to augment or replace human decision-making in real-world applica- tions, the agents typically require the following abilities: 1. Perception Ability: AI Agents must be able to perceive the task instruction from human and system specifications. 2. Task Planing Ability: AI Agents should have the capacity to create a step-by-step plan for complex task composition based on the perceived instruction and specifications. This usually involves the generation of critical subtask sequences, and the ability to adjust the plan dynamically in response to changes in the task or environment.
2308.03427#8
2308.03427#10
2308.03427
[ "2302.13971" ]
2308.03427#10
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
3. Tool Usage Ability: On the one hand, AI Agents should possess the capacity to select a variety of existing tools or resources to execute complex tasks. On the other hand, AI Agents should create new tools by converting the task requirements. This ability enables the AI Agent to extend its capabilities beyond LLM itself and the existing tools by leveraging the vast resources available in the digital world. Finally, AI Agents should be able to execute the selected or created tools for truly grounding the human request based on the resources and constraints of systems.
2308.03427#9
2308.03427#11
2308.03427
[ "2302.13971" ]
2308.03427#11
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
4. Learning/Reflection/Memory (from Feedback): AI Agents should be capable of learning from feedback, including correct results and exception errors. They should incorporate 4 memory, such as logging or chat history, and reflection to adapt their plans or decisions. This allows the agents to improve their performance and efficiency in task execution continuously. 5. Summarization: After several rounds of interaction with humans, tools, and systems, AI agents can ultimately complete the original task provided by the users. At this point, AI agents should be able to summarize the interaction history and provide a final answer that is concise and easy to understand for the users. To endow AI Agents with the aforementioned abilities, some techniques that can be used include chain-of-thought (CoT) and vector databases, as shown in Table 1.
2308.03427#10
2308.03427#12
2308.03427
[ "2302.13971" ]
2308.03427#12
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
Table 1: A simple illustration of the techniques for endowing the key ability. Ability Possible Techniques Perception Task Planing Tool Usage (Selection/Creation/Execution) Learning/Reflection/Memory Summarization Multi-input Fusion Zero-shot CoT and Few-shot CoT Text Matching/Code Generation/ Action Grounding RLHF/Multi-agent Debate/ Vector Database Attention Mechanism and Natural Language Generation # 2.3 Agent Design Task planning and tool usage represent the cornerstone of LLMâ s abilities. Others like perception, learning/reflection/memory (from feedback), and summarization are indeed critical, but they primarily serve to enhance and support these two core competencies. Therefore, concentrating on these two key competencies - Task Planning and Tool Usage (TPTU for short) - we have devised two distinct types of AI agents, as depicted in Figure 3:
2308.03427#11
2308.03427#13
2308.03427
[ "2302.13971" ]