| Column | Type | Min | Max |
|---|---|---|---|
| doi | string (length) | 10 | 10 |
| chunk-id | int64 | 0 | 936 |
| chunk | string (length) | 401 | 2.02k |
| id | string (length) | 12 | 14 |
| title | string (length) | 8 | 162 |
| summary | string (length) | 228 | 1.92k |
| source | string (length) | 31 | 31 |
| authors | string (length) | 7 | 6.97k |
| categories | string (length) | 5 | 107 |
| comment | string (length) | 4 | 398 |
| journal_ref | string (length) | 8 | 194 |
| primary_category | string (length) | 5 | 17 |
| published | string (length) | 8 | 8 |
| updated | string (length) | 8 | 8 |
| references | list | - | - |
2308.03313
18
In the simulations for NODEconv, the opinion convergence times of NIN and NINL are both significantly and positively correlated with the threshold value: as the threshold increases, the intensity of opinion interaction increases and convergence takes longer. However, the opinion convergence time of NIN is not significantly correlated with the proportions of the three agent types or with the output value of the LLM, suggesting that the time required for NIN opinions in the population to reach a steady state depends only on the threshold. The convergence time of NINL exhibits a significant positive correlation with the NIN-to-NIL ratio and a significant negative correlation with the NINL-to-NIN ratio. Conversely, the convergence time of all agents displays a significant positive correlation with the NIN-to-NINL ratio and a significant negative correlation with the NIL-to-NIN ratio. These findings indicate that a larger number of individuals who do not use LLMs in the opinion network extends the time needed for the opinions of those who partially rely on both LLMs and social groups to reach a stable state. A greater number of individuals who partially rely on LLMs leads to shorter convergence times for their individual opinions but longer convergence times for collective opinions. Conversely, a greater number of people who fully rely on LLMs increases the intensity of opinion interaction, resulting in a long convergence time for NINL but a short convergence time for collective opinions.
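The dynamics described here come from a threshold-based (bounded-confidence) opinion model with three usage strategies. The paper's exact update rule is not reproduced in this chunk, so the sketch below is only a minimal illustration of how such a simulation and its convergence-time measurement could look; the agent-type weights, the fixed LLM opinion value, and the stopping criterion are assumptions, and this toy dynamics need not reproduce the correlations reported above.

```python
import random

# Minimal illustrative sketch of a bounded-confidence opinion model with three
# agent types (NIN: no LLM use, NINL: partial reliance, NIL: full reliance).
# The update weights, LLM opinion, and stopping rule are assumptions, not the
# authors' exact model.
def convergence_time(n=50, threshold=0.3, llm_opinion=0.8, frac=(4, 12, 1),
                     eps=1e-4, max_steps=5_000, seed=0):
    rng = random.Random(seed)
    total = sum(frac)
    types = ["NIN"] * (n * frac[0] // total) + ["NINL"] * (n * frac[1] // total)
    types += ["NIL"] * (n - len(types))
    x = [rng.random() for _ in range(n)]          # initial opinions in [0, 1]

    for step in range(1, max_steps + 1):
        new = x[:]
        for i in range(n):
            # only peers within the confidence threshold are heard
            peers = [x[j] for j in range(n) if abs(x[j] - x[i]) <= threshold]
            social = sum(peers) / len(peers)
            if types[i] == "NIN":                  # social influence only
                new[i] = social
            elif types[i] == "NINL":               # blend social and LLM opinion
                new[i] = 0.5 * social + 0.5 * llm_opinion
            else:                                  # NIL adopts the LLM opinion
                new[i] = llm_opinion
        if max(abs(a - b) for a, b in zip(new, x)) < eps:
            return step                            # convergence time in steps
        x = new
    return max_steps

if __name__ == "__main__":
    for thr in (0.1, 0.3, 0.5):
        print(f"threshold={thr}: converged after {convergence_time(threshold=thr)} steps")
```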
2308.03313#18
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4 figures, 2 tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
18
• Claude, developed by Anthropic, is committed to maintaining honesty and ensuring user safety. With its impressive size, Claude ranks among the largest language models globally and poses a formidable challenge to ChatGPT as a strong competitor.
• InternLM, a sophisticated language model developed by Shanghai AI Lab, boasts multi-round dialogue capability and an impressive ability to comprehend super-long text. This language model is meticulously designed to cater to the nuances of the Chinese language, enabling it to comprehensively understand and effectively process Chinese text. Here, we adopted the version with 120 billion parameters.
• Ziya is an expansive and robust pre-training model developed by IDEA, derived from LLaMA with 13 billion parameters. This comprehensive model exhibits a wide range of capabilities, including translation, programming, and mathematical calculations. Notably, it stands out as a bilingual LLM, highlighting its ability to effectively process and comprehend text in Chinese.
• ChatGLM, developed by Tsinghua University, is an open-source dialogue language model that supports bilingual Q&A in Chinese and English, with a particular focus on Chinese optimization. Built on the General Language Model (GLM) architecture and utilizing model quantization technology, ChatGLM can be easily deployed on consumer-grade graphics cards, enabling local implementation by users.
2308.03427#18
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
18
a swarm of bats swooping through the night sky, flapping ominously and casting eerie shadows. You arrive home earlier than expected from your date. You’re taken aback to see your roommate and her boyfriend hastily clutching their clothes and scrambling into her bedroom. After paying for your purchases, you were leaving a packed City Centre drugstore. You walked through the scanner at the door, and the alarm went off as if you were a shoplifter. You had lent your friend a large sum of money that he had not repaid. Suddenly, you needed the money back in order to pay your rent. You knew you were going to have to ask your friend to repay the loan. You were attending a cocktail party where you didn’t know many people. Just as you started to enter, you heard an announcement that the guest of honor was arriving. However, the spotlight followed your entrance instead of the real guest of honor, who was just behind you.
(Factor labels from the source table: Embarrassment; Sticky Situations; Centre of Attention)
2308.03656#18
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
18
Operating System (OS). Allowing LLMs to access and manipulate an OS in the terminal is a fascinating but challenging mission. Despite attempts at translating natural language to shell commands (Lin et al., 2018), few prior efforts evaluate models in executable environments. We aim to evaluate LLMs in genuine interactive bash environments of an OS (i.e., Ubuntu Docker (Merkel et al., 2014)) on human questions with deterministic answers (e.g., the number of users with non-/home directories in an OS) or series of operations for practical goals (e.g., recursively set all directory files to read-only, excluding mine). We adopt the success rate (SR) as the evaluation metric. (Cf. Appendix B for more details)
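The success rate here is simply the fraction of tasks whose execution result matches the expected outcome. The sketch below is a hypothetical illustration of running agent-issued commands in a bash shell and aggregating SR; the task format, expected values, and checking logic are assumptions, not AgentBench's actual harness.

```python
import subprocess

# Hypothetical sketch: execute agent-issued bash commands and compute the
# success rate (SR) against expected outputs. Task format and checking logic
# are assumptions, not AgentBench's actual evaluation harness.
def run_bash(command: str) -> str:
    result = subprocess.run(["bash", "-lc", command], capture_output=True, text=True)
    return result.stdout.strip()

def success_rate(tasks) -> float:
    # tasks: iterable of (agent_command, expected_output) pairs
    tasks = list(tasks)
    passed = sum(run_bash(cmd) == expected for cmd, expected in tasks)
    return passed / len(tasks)

if __name__ == "__main__":
    demo_tasks = [
        ("echo 3", "3"),              # placeholder task with a deterministic answer
        ("ls /home | wc -l", "2"),    # e.g., count of directories under /home (placeholder)
    ]
    print(f"SR = {success_rate(demo_tasks):.2f}")
```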
2308.03688#18
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03313
19
For NODESD, the standard deviations of the opinions of NIN and NINL are both significantly and negatively correlated with the threshold value, indicating that a larger threshold value results in opinions that are less dispersed around the mean. The correlations between the standard deviations of NIN and NINL and the proportions of the three agent types are more consistent: they are significantly and inversely correlated with the proportions of NIN and NINL, but significantly and positively correlated with the proportion of NIL. These findings suggest that a higher proportion of NIL is associated with increased disagreement within NIN and NINL, while a higher proportion of NIN and NINL is linked to greater convergence of their internal opinions. In contrast, the standard deviation of collective opinion is significantly positively correlated with the proportion of NINL, significantly negatively correlated with the proportion of NIL, and only slightly negatively correlated with the proportion of NIN. Additionally, Fig. 3D shows that the standard deviation of the agents decreases very slowly while the threshold is below 0.5 and very quickly once the threshold exceeds 0.5. This result indicates that even with human intervention, the dispersion of the opinions of each agent remains large as long as the threshold of the agent is below 0.5. Once the intervention pushes the threshold above 0.5, the marginal effect becomes significant and the tendency to reach consensus increases markedly.
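The dispersion metric (NODESD) and the cluster-count metric (NODEclus, discussed in the following chunks) can both be computed directly from a final opinion vector. Below is a minimal, hypothetical sketch of both computations; the grouping labels and the clustering tolerance are assumptions, not the authors' procedure.

```python
from statistics import pstdev

# Hypothetical sketches of two outcome metrics on a final opinion vector:
# per-group standard deviation (NODESD-style) and a simple opinion-cluster
# count (NODEclus-style). Group labels and the tolerance are assumptions.
def opinion_std_by_group(opinions, types):
    groups = {}
    for value, kind in zip(opinions, types):
        groups.setdefault(kind, []).append(value)
    return {kind: pstdev(vals) for kind, vals in groups.items()}

def count_opinion_clusters(opinions, tol=0.05):
    # sort opinions and start a new cluster whenever the gap exceeds `tol`
    clusters, prev = 0, None
    for value in sorted(opinions):
        if prev is None or value - prev > tol:
            clusters += 1
        prev = value
    return clusters

if __name__ == "__main__":
    opinions = [0.10, 0.12, 0.48, 0.50, 0.52, 0.93]
    types = ["NIN", "NIN", "NINL", "NINL", "NIL", "NIL"]
    print(opinion_std_by_group(opinions, types))
    print(count_opinion_clusters(opinions))  # -> 3 clusters
```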
2308.03313#19
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4 figures, 2 tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
19
• Chinese-Alpaca-Plus is built by extending the existing vocabulary of LLaMA (from Meta AI, formerly known as Facebook AI Research Laboratory) with an additional 20,000 Chinese tokens. In this version, we use a model with 33 billion parameters. The training text has been expanded to 120GB, and the fine-tuning instruction data has been increased to 4.3M.

Table 2: The LLMs evaluated in this paper.

| Organization | Model Name | Model Parameters |
|---|---|---|
| OpenAI | ChatGPT [21] | 200B |
| Anthropic | Claude [22] | >52B |
| Shanghai AI Lab | InternLM | 120B |
| IDEA | Ziya-13B | 13B |
| Tsinghua University | ChatGLM-130B [23] | 130B |
| - | Chinese-Alpaca-Plus-33B [24, 25] | 33B |

# 3.2 Evaluation on Task Planning Ability

In this section, to evaluate the planning capabilities of the LLM-based AI agents, we have structured the evaluations as follows.
2308.03427#19
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
19
3.1.1 ANGER (Törestad, 1990; Martin & Dahlen, 2007; Sullman, 2006)
Anger-1: Self-Opinioned Individuals (13). Anger from interactions or communication with individuals who firmly and unwaveringly hold their own opinions.
Anger-2: Blaming, Slandering, and Tattling (11). Anger triggered by being subjected to blame, slander, and tattling.
Anger-3: Bullying, Teasing, Insulting, and Disparaging (15). Experiencing or witnessing anger due to bullying, teasing, insulting, and disparaging behaviors directed at oneself or others.
Anger-4: Thoughtless Behaviors and Irresponsible Attitudes (14). Anger either from encountering others’ thoughtless behaviors and irresponsible attitudes or from experiencing unfavorable consequences resulting from one’s own actions.
Anger-5: Driving Situations (35). Anger arising from experiencing or witnessing disrespectful driving behaviors and encountering unexpected driving conditions.
3.1.2 ANXIETY (Shoji et al., 2010; Guitard et al., 2019; Simpson et al., 2021)
Anxiety-1: External Factors (11). Anxiety arising from factors beyond an individual’s control or influence.
Anxiety-2: Self-Imposed Pressure (16). Anxiety stemming from self-imposed expectations or pressure.
2308.03656#19
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
19
Database (DB). As database analysis is crucial but also difficult in many daily affairs, it is paramount to examine LLMs’ abilities to operate on real databases via SQL. Prior research has placed significant emphasis on individual procedures, such as translation between SQL and natural language (Zhong et al., 2017), or answering questions given individual small tables (Nan et al., 2021; Iyyer et al., 2017). However, few consider evaluating models on the complete pipeline as a whole. Therefore, AGENTBENCH evaluates LLMs on authentic SQL interfaces, databases, multiple tables, and different types of queries, as in the real world. We adopt the SR as the main evaluation metric. (Cf. Appendix C for more details)
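Evaluating on an authentic SQL interface amounts to executing the model's query against a live database and checking the result (or resulting state) against a reference. The snippet below is a hypothetical sketch of such a result-based check using SQLite; it is not AgentBench's actual pipeline, and the table and queries are made up.

```python
import sqlite3

# Hypothetical sketch of a result-based DB check: execute the model's SQL and a
# reference SQL on the same database and compare the returned rows. This is not
# AgentBench's pipeline; the schema and queries are illustrative only.
def sql_result_matches(db_path: str, predicted_sql: str, reference_sql: str) -> bool:
    with sqlite3.connect(db_path) as conn:
        try:
            pred = conn.execute(predicted_sql).fetchall()
        except sqlite3.Error:
            return False                      # malformed SQL counts as a failure
        gold = conn.execute(reference_sql).fetchall()
    return sorted(pred) == sorted(gold)       # order-insensitive comparison

if __name__ == "__main__":
    conn = sqlite3.connect("demo.db")
    conn.execute("CREATE TABLE IF NOT EXISTS users(name TEXT, age INT)")
    conn.execute("DELETE FROM users")
    conn.executemany("INSERT INTO users VALUES(?, ?)", [("alice", 30), ("bob", 25)])
    conn.commit()
    conn.close()
    print(sql_result_matches("demo.db",
                             "SELECT name FROM users WHERE age >= 30",
                             "SELECT name FROM users WHERE age > 26"))
```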
2308.03688#19
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03313
20
For NODEclus, the number of opinion clusters for NIN and NINL is negatively associated with the threshold value, showing that the larger the threshold value, the fewer the opinion clusters and the more consensus-oriented the opinions become. The number of clusters for NIN, NINL, and the group is significantly negatively correlated with the proportion of NIL. This observation suggests that a higher proportion of NIL not only results in a reduced number of clusters of both internal and collective opinions but also leads to a more consensual opinion among individuals who do not use LLMs or partially rely on LLMs. Additionally, the number of clusters of NIN is significantly positively correlated with the proportion of NIN and significantly negatively correlated with the proportion of NINL. This result indicates that the greater the proportion of NIN, the more dispersed the opinions within NIN. However, the number of clusters of NINL is significantly negatively correlated with the proportion of NIN and significantly positively correlated with the proportion of NINL, indicating that increasing the proportion of NINL will concentrate the opinions within NINL. Finally, Fig. 3E shows that
2308.03313#20
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4 figures, 2 tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
20
# 3.2 Evaluation on Task Planning Ability

In this section, to evaluate the planning capabilities of the LLM-based AI agents, we have structured the evaluations as follows. For TPTU-OA, we begin by examining the agents’ ability to plan the order of tool use. This is followed by an evaluation of the agents’ capacity to plan not only the sequence of tools but also the corresponding subtask descriptions. Subsequently, we conduct a specialized planning evaluation in which the agents must generate multiple sequences of key-value pairs of the form {tool: subtask description} when decomposing complex problems. Moreover, we expand the toolset with additional, unrelated tools to further challenge and reassess the planning ability of the LLM-based AI agents. For TPTU-SA, we follow the same regime, in which the agent generates multiple sequences of key-value pairs of the form {tool: subtask description} for evaluation.

# 3.2.1 TPTU-OA: Tool Order Planning

Here, we utilize two kinds of tools for problem-solving: the SQL generator, which retrieves data from databases, and the Python generator, adept at addressing mathematical questions.
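The planning output is thus a sequence of {tool: subtask description} pairs drawn from a predefined tool set. A minimal, hypothetical sketch of parsing and validating such a plan is shown below; the JSON schema and the tool names (sql_generator, python_generator) are assumptions for illustration, not the exact TPTU prompt format.

```python
import json

# Hypothetical sketch: parse a planned tool sequence of the form
# [{"tool": ..., "subtask": ...}, ...] and check it against a predefined tool
# set. The schema and tool names are assumptions, not the TPTU prompt format.
TOOL_SET = {"sql_generator", "python_generator"}

def parse_plan(raw: str):
    plan = json.loads(raw)                                   # fails on malformed output
    steps = []
    for step in plan:
        if step["tool"] not in TOOL_SET:
            raise ValueError(f"unknown tool: {step['tool']}")
        steps.append((step["tool"], step["subtask"]))
    return steps

if __name__ == "__main__":
    raw = ('[{"tool": "sql_generator", "subtask": "fetch monthly sales from the database"}, '
           '{"tool": "python_generator", "subtask": "compute the month-over-month growth rate"}]')
    print(parse_plan(raw))
```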
2308.03427#20
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
20
Anxiety-2: Self-Imposed Pressure (16). Anxiety stemming from self-imposed expectations or pressure.
Anxiety-3: Personal Growth and Relationships (9). Anxiety about personal growth, relationships, and interpersonal dynamics.
Anxiety-4: Uncertainty and Unknowns (9). Anxiety triggered by unknown outcomes, unpredictable situations, uncertainty about the future, or disruptions to one’s routines.
# 3.1.3 DEPRESSION (Keller & Nesse, 2005)
Depression-1: Failure of Important Goals (5). Depression due to failure in achieving goals in the past or potential future.
Depression-2: Death of Loved Ones (5). Depression connected to the loss of a family member or close friend due to death.
Depression-3: Romantic Loss (5). Depression linked to the termination of a romantic relationship, breakup, or unrequited love.
Depression-4: Chronic Stress (5). Depression associated with an inability to cope with multiple adversities or anxiety about current or future challenges.
Depression-5: Social Isolation (5). Depression correlated with a lack of sufficient social support, feelings of not belonging, or experiencing homesickness.
Depression-6: Winter (5). Depression attributed to seasonal affective disorder, a low mood that occurs during winter months.
2308.03656#20
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
20
Knowledge Graph (KG (Anonymous, 2023)). Engaging with contemporary KGs, which are often vast in size (e.g., FREEBASE (Bollacker et al., 2008) has over 45M entities and 3B facts), demands a broad range of skills from an intelligent agent (Gu et al., 2023). Operating in such environments, which are only partially observable, requires the agent to make decisions with incomplete information and manage inherent uncertainties with various skills, including language understanding (e.g., intricacies and subtleties), planning (e.g., breaking down instructions into more manageable components), and tool use (e.g., interacting with KG interfaces). As a result, we propose KG as a representative testing ground to assess the decision-making abilities of AI agents. We adopt question answering as the basic task formulation and, consequently, the answer F1 as the metric. (Cf. Appendix D for more details)

3.2 GAME-GROUNDED ENVIRONMENTS

Playing games usually requires strong capabilities in designing strategies, following instructions, and reasoning. Compared to code-grounded environments, tasks in game-grounded environments require no expertise in coding but a more integral grasp of commonsense and world knowledge.
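The answer F1 used for the KG task above compares the set of answers the agent returns with the gold answer set. Below is a minimal sketch of that metric under the assumption that both sides are plain entity sets; it mirrors the common definition rather than AgentBench's exact implementation.

```python
# Minimal sketch of set-level answer F1, as typically used for KG question
# answering: precision and recall over predicted vs. gold answer entities.
# Assumes both answers are plain sets of strings.
def answer_f1(predicted, gold):
    pred, gold = set(predicted), set(gold)
    if not pred or not gold:
        return float(pred == gold)            # both empty counts as a match
    overlap = len(pred & gold)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    print(answer_f1({"Barack Obama", "Joe Biden"}, {"Barack Obama"}))  # ~0.667
```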
2308.03688#20
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03313
21
the proportion of NINL, indicating that increasing the proportion of NINL will concentrate the opinions within NINL. Finally, Fig. 3E shows that the number of opinion clusters decreases rapidly as the threshold increases from 0 to 0.4 and then decreases slowly once the threshold exceeds 0.4. This result suggests that the initial threshold value must be set appropriately to achieve a balance between opinion diversity and consensus. Combining the results from Fig. 3D and Fig. 3E, we observe that increasing the threshold through intervention can quickly converge chaotic opinions into multiple distant opinion clusters when the threshold is less than 0.3. However, when the threshold is greater than 0.7, the number of opinion clusters is small and opinions tend toward consensus. In summary, Fig. 3A-E suggest that the overall convergence of the opinion network is slower and the opinion distribution is more divided when more people partially rely on LLMs. In contrast, as the number of individuals who solely rely on LLMs increases, the convergence of the opinion network accelerates, and the opinion distribution becomes more concentrated and oriented towards
2308.03313#21
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4 figures, 2 tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
21
Here, we utilize two kinds of tools for problem-solving: the SQL generator, which retrieves data from databases, and the Python generator, adept at addressing mathematical questions. To validate the capacity of the LLM-based AI agents to strategically plan the tool order, we designed the prompt shown in Figure 8 of Appendix B. This design is motivated by the goal of assessing the ability of LLM-based AI agents to understand complex problems and subsequently decompose them into a sequence of simpler tasks executed by appropriately selected tools. Specifically, we require the LLM-based AI agent to follow our instructions, select tools from our pre-defined tool set with detailed function descriptions, conform strictly to the given format, and understand the demonstrations in order to learn from them. Upon feeding these prompts into the LLM-based AI agents under evaluation, we obtained the accuracy rates for tool planning shown in Table 3.

# Table 3: The evaluation results for the planning of tool order generation.

| Model | Accuracy | Model | Accuracy |
|---|---|---|---|
| ChatGPT | 100% | Claude | 100% |
| ChatGLM | 45% | Chinese-Alpaca-Plus | 20% |
| Ziya | 45% | InternLM | 80% |

The results of our experiments indicate that models, notably Ziya and ChatGLM, frequently grapple with the generation of lists in the correct format. For other models, the predominant challenges lie in
2308.03427#21
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
21
Depression-6: Winter (5). Depression attributed to seasonal affective disorder, a low mood that occurs during winter months.
# 3.1.4 FRUSTRATION (Berna et al., 2011)
Frustration-1: Disappointments and Letdowns (6). Frustration due to unmet expectations or hopes, leading to feelings of disappointment or being let down.
Frustration-2: Unforeseen Obstacles and Accidents (9). Frustration involving unexpected events or circumstances creating obstacles or accidents, disrupting one’s plans or activities.
Frustration-3: Miscommunications and Misunderstanding (5). Frustration arising from ineffective conveyance or interpretation of information, resulting in confusion, disagreements, or unintended consequences due to a lack of clear communication or understanding between individuals.
Frustration-4: Rejection and Interpersonal Issues (5). Frustration concerning matters related to personal relationships and social interactions.
3.1.5 JEALOUSY (Kupfer et al., 2022; Lee et al., 2022; Park et al., 2023)
Jealousy-1: Romantic (Opposite Gender) (11). Jealousy pertaining to one’s partner’s actions or behaviors within a romantic relationship, particularly when interacting with individuals of the opposite gender. It involves feelings of discomfort or insecurity.
2308.03656#21
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
21
Digital Card Game (DCG). Games, especially those that require strategies and planning, could serve as simulated environments for intelligent agent development. DCG (e.g., Hearthstone (Hoover et al., 2020)) is an ideal option for text-only LLM evaluation. It usually involves abundant text descriptions for cards, turn-based competition, and thoughtful playing strategies to win, testing a model’s understanding of game rules, operating logic, and ability to form strategic decisions based on current conditions and past experiences in the game.
2308.03688#21
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03313
22
who solely rely on LLMs increases, the convergence of the opinion network accelerates, and the opinion distribution becomes more concentrated and oriented towards consensus. Therefore, maintaining opinion diversity and full interaction of opinions requires a large proportion of NINL. However, excessive reliance on LLM can lead to a rapid convergence of opinion networks, which may limit opinion diversity and compromise the quality of collective
2308.03313#22
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4 figures, 2 tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
22
The results of our experiments indicate that models, notably Ziya and ChatGLM, frequently grapple with the generation of lists in the correct format. For other models, the predominant challenges lie in generating tools in the correct sequence or in the occasional omission of necessary tools. Nonetheless, the issue of parsing list formats is generally negligible. These findings suggest that the majority of LLM-based AI agents possess a fundamental capability to analyze the tool needs of a given problem and understand its task requirements. To further explore whether these LLM-based AI agents can effectively break down the original problem into sub-tasks, we proceed to the following section. # 3.2.2 TPTU-OA: Tool Order Planning and Subtask Description Generation
2308.03427#22
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
22
Jealousy-2: Romantic (Same Gender) (11). Same situations as Jealousy-1 but focusing specifically on interaction with individuals of the same gender. Jealousy-3: Material Possession (2). Jealousy centered around possessions or material goods, stemming from a sense of unfairness or envy when someone discovers that another person acquired the same item or experience at a significantly lower price. Jealousy-4: Experiential (3). Jealousy arising from feelings of envy regarding the experiences or activities others have had. It is driven by missing out or not receiving similar benefits. # 3.1.6 GUILT (Nakagawa et al., 2015; Luck & Luck-Sikorski, 2022) [Figure 2: Our framework for testing both LLMs and humans — (1) Default Emotion Measure, (2) Situation Imagination, (3) Evoked Emotion Measure; example situation: "Imagine you are the protagonist of the following situation: A boy kicks a ball at you on purpose and everybody laughs."] Guilt-1: Betrayal and Deception (13). Guilt arising from dishonest or disloyal actions towards others. Guilt-2: Relationship and Interpersonal (26). Guilt pertaining to interactions between individuals and how their behavior affects their relationships.
2308.03656#22
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
22
In AGENTBENCH we adapt a simplified DCG system—Aquawar1—from the 2021 Tsinghua University Agent Competition (THUAC) hosted by the Student Association for Science and Technology in the Department of Computer Science and Technology (CST-SAST), for evaluating LLM-as-Agent. In Aquawar, the agent acts as a player managing a team of fishes with different talents to battle against another team (controlled by our ad-hoc baseline agent) in a turn-based form. We report LLMs’ win rate as the evaluation metric. (Cf. Appendix E for more details) Lateral Thinking Puzzles (LTP). Lateral thinking puzzles (Sloane, 1992), or situation puzzles, 海龟汤, is a popular group-playing game around the world. The game usually has a person hosting the puzzle and others guess by asking riddle-related questions. The host can only respond “yes”, “no”, or “irrelevant”. The game is terminated when one of the players recovers the critical plots of the puzzle. Its name derives from the psychological term “lateral thinking” (De Bono, 1970), which refers to the ability of deducing facts from unconventional perspectives and exploring new ideas.
2308.03688#22
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03427
23
# 3.2.2 TPTU-OA: Tool Order Planning and Subtask Description Generation Simply planning the order of tool usage is not sufficient to fully address a problem. To truly solve it, we need to provide a guide or instructions for the usage of each tool, that is, a decomposed subtask description. Therefore, we can decompose the original complex problem into two separate sequences. One sequence represents the order in which the tools are utilized, while the other sequence corresponds to the subtask descriptions that each tool in the tool sequence aims to resolve. A problem is only truly solved when both the tool and subtask description sequences have been successfully planned. In order to verify whether LLM-based AI agents truly have the ability to solve complex problems, we designed a new prompt as shown in Figure 9 of Appendix B. The main improvement is to plan the corresponding subtask description for each tool after the tool planning is completed. Table 4: The evaluation results for the planning of tool order and subtask description generation (accuracy): ChatGPT 55%, Claude 15%, ChatGLM 10%, Chinese-Alpaca-Plus 0%, Ziya 10%, InternLM 45%. After feeding the prompt to these LLM-based AI agents, we get results shown in Table 4.
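To make the setup concrete, here is a minimal sketch of how such a planning prompt could be issued and scored. The prompt wording, the `query_llm` helper, and the exact-match scoring rule are illustrative assumptions, not the paper's Figure 9 prompt or its official metric.

```python
import json
from typing import Callable, List, Tuple

def plan_tools_and_subtasks(question: str,
                            tool_specs: dict,
                            query_llm: Callable[[str], str]) -> Tuple[List[str], List[str]]:
    """Ask the model for a tool sequence plus a parallel list of subtask descriptions."""
    prompt = (
        "You have access to the following tools:\n"
        + "\n".join(f"- {name}: {desc}" for name, desc in tool_specs.items())
        + "\n\nReturn a JSON object with two lists, 'tools' and 'subtasks', giving the "
          "order of tool calls and the subtask each call should solve.\n"
        + f"Question: {question}"
    )
    reply = query_llm(prompt)
    parsed = json.loads(reply)  # raises if the model breaks the required format
    return parsed["tools"], parsed["subtasks"]

def sequence_accuracy(predictions: List[List[str]], references: List[List[str]]) -> float:
    """Exact-match accuracy over whole planned sequences (one possible scoring rule)."""
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)
```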
2308.03427#23
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
23
Guilt-2: Relationship and Interpersonal (26). Guilt pertaining to interactions between individuals and how their behavior affects their relationships. Guilt-3: Broken Promises and Responsibilities (32). Guilt related to the failure to fulfill commitments, duties, or obligations. Guilt-4: Personal and Moral (31). Guilt involving personal choices, decisions, and ethical considerations. # 3.1.7 FEAR (Cuthbert et al., 2003; Arrindell et al., 1984; Blanchard et al., 2001) Fear-1: Social Fears (16). Fear of being watched by others and being the center of attention within a group. Fear-2: Agoraphobia Fears (9). Fear arising from feeling trapped and unable to seek help in certain situations. Fear-3: Injury Fears (11). Fear of witnessing wounds, blood or experiencing personal injury. Fear-4: Dangerous Environments (17). Fear related to potential threats, harm, and frightening experiences. Fear-5: Harmless Animals (6). Fear towards animals perceived as creepy or disgusting, such as worms, bats, snakes, or rats, despite their harmless nature. # 3.1.8 EMBARRASSMENT (Sabini et al., 2000; 2001)
2308.03656#23
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
23
In this dataset, we first set up an LTP host system for automatic judging (Cf. Appendix F). To assess LLMs’ lateral reasoning prowess, a diverse puzzle dataset of varied difficulty levels is curated from the web. We break down the true plot into several bullets and measure the portion of guessed-out bullets (i.e., game progress) when an agent has exhausted the maximum number of playing rounds as the evaluation metric. Through this assessment, we aim to gain insights into the depth and agility of LLMs’ lateral reasoning abilities. (Cf. Appendix F for more details) House-Holding (HH, ALFWorld (Shridhar et al., 2020b)). Embodied game environments such as house-holding, which require strong commonsense grounding, have been well-established for language agent evaluation (Côté et al., 2019). In AGENTBENCH, we assess the model’s capability in accomplishing tasks in physical house-holding environments on the classical ALFWorld (Shridhar et al., 2020b) derived from the well-established text-game toolkit TextWorld (Côté et al., 2019). The agent needs to accomplish house-holding tasks such as “Put a pan on the dining table”. We adopt the SR as the evaluation metric. (Cf. Appendix G for more details) 3.3 WEB-GROUNDED ENVIRONMENTS
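The "game progress" metric can be sketched as the fraction of ground-truth plot bullets the agent has uncovered when the round budget runs out. The substring matcher below is only a placeholder for illustration; AgentBench relies on an automatic host/judge instead.

```python
from typing import List

def game_progress(plot_bullets: List[str], guessed_facts: List[str]) -> float:
    """Fraction of ground-truth plot bullets covered by the agent's guesses."""
    def covered(bullet: str) -> bool:
        # Placeholder matcher; the benchmark uses an automatic judge rather than substrings.
        return any(bullet.lower() in guess.lower() or guess.lower() in bullet.lower()
                   for guess in guessed_facts)
    hits = sum(covered(b) for b in plot_bullets)
    return hits / len(plot_bullets) if plot_bullets else 0.0

# Example: 2 of 4 plot bullets recovered when the rounds are exhausted -> progress 0.5
print(game_progress(
    ["the man is blind", "the soup was turtle soup", "the waiter told the truth",
     "the man recognized a past deception"],
    ["the soup was turtle soup", "the man is blind so he could not see it"],
))
```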
2308.03688#23
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03313
24
Fig.3F provides additional confirmation of the observations presented in Fig.3A and Fig.3B. Specifically, the minimum and maximum values of the opinion difference corresponded to parameter values of (0.95, 0.20, 0.80, 0.00, -1.00) and (0.93, 0.19, 0.81, 0.00, 1.00), respectively. Notably, the output values of LLMs for these two sets of parameters were diametrically opposed. Fig.3G indicates that to achieve a rapid attainment of a stable state in collective opinion exchange, individual cognitive acceptability should be close to 0, the output opinion value of LLMs should be 0, and the proportions of NIN, NINL, and NIL agents should be approximately 27%, 27%, and 46%, respectively. For a more intense opinion exchange process, the individual cognitive acceptability should preferably be 0.6, the output opinion value of LLMs should be close to 0, and the proportions of NIN, NINL, and NIL agents should be approximately 44%, 41%, and 15%, respectively. Fig.3H illustrates that the minimum value of the standard deviation of collective opinion occurs when the fraction of
2308.03313#24
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4 figures, 2 tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
24
After feeding the prompt to these LLM-based AI agents, we get results shown in Table 4. Although the generation of tool sequences and their corresponding subtask descriptions might be an effective approach to problem-solving, there is a significant decrease in accuracy for all LLMs as can be seen. We hypothesize that there are a few potential drawbacks to this method: 1. Difficulty in Error Tracking and Debugging. Generating the complete tool and subtask sequences may make it more challenging to track and debug errors. If an error arises within the sequence, it might require a total regeneration instead of a simple modification or repair to the erroneous part. 2. Tool-Subtask Pairing Issue. If all tool sequences and subtask descriptions are generated independently, there’s an inherent risk of misalignment between the tools and their corresponding subtasks. This could potentially lead to an improper pairing, which, in turn, could result in a flawed or ineffective solution that fails to appropriately resolve the given problem. 3. Lack of Flexibility. The approach may lack the flexibility needed when complex problems require adjustments to the tool or subtask sequence. 4. Dependency on Global Information. Generating the entire tool and subtask sequences requires a global understanding and planning of the entire problem. However, in some instances, certain parts of the problem might not be clear at the early stages of problem-solving, which could pose challenges within this framework.
2308.03427#24
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
24
3.1.8 EMBARRASSMENT (Sabini et al., 2000; 2001) Embarrassment-1: Intimate (13). Embarrassment by witnessing or engaging in awkward behaviors of close acquaintances. Embarrassment-2: Stranger (13). Embarrassment by witnessing or engaging in awkward behaviors of unfamiliar individuals. Embarrassment-3: Sticky Scenarios (10). Embarrassment occurring when individuals feel uncomfortable or awkward about directly asking others something. Embarrassment-4: Centre of Attention (16). Embarrassment triggered when individuals engage in awkward behaviors and find themselves under observation as the center of attention. 3.2 MEASURING AROUSED EMOTIONS
2308.03656#24
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
24
3.3 WEB-GROUNDED ENVIRONMENTS Web pages have been the primary interfaces through which people interact in the real world. Thus, assessing LLM agents’ behaviors in complex web environments is critical and valuable for their further development. Here, we adapt two existing web browsing datasets for practical evaluation of LLMs. Web Shopping (WS, WebShop (Yao et al., 2022)). Online shopping is a very practical and important part of modern life. Its trajectory, which comprises searching, viewing, and choosing desirable items on a real e-commerce website, requires autonomous agents’ strong reasoning and decision-making abilities. WebShop (Yao et al., 2022), a simulated online shopping environment, serves exactly such a purpose for evaluating language agents. While it was originally evaluated on specifically trained models, we propose assessing LLMs with mere prompting. (Cf. Appendix H for more details)
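A prompting-only rollout on a WebShop-style environment might look like the sketch below. The `env.reset`/`env.step` interface, the action strings, and the `query_llm` helper are assumptions made for illustration, not WebShop's actual API or AgentBench's adaptation.

```python
from typing import Callable

def run_episode(env, query_llm: Callable[[str], str], max_turns: int = 15) -> float:
    """Roll out one shopping episode with a prompted LLM and return the final reward."""
    observation = env.reset()
    history = []
    reward, done = 0.0, False
    for _ in range(max_turns):
        prompt = (
            "You are shopping online. Reply with exactly one action such as "
            "search[red running shoes], click[item name], or buy now.\n"
            + "\n".join(history)
            + f"\nObservation: {observation}\nAction:"
        )
        action = query_llm(prompt).strip()
        history.append(f"Observation: {observation}\nAction: {action}")
        observation, reward, done = env.step(action)  # assumed (obs, reward, done) signature
        if done:
            break
    return reward
```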
2308.03688#24
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03313
25
be approximately 44%, 41%, and 15%, respectively. Fig.3H illustrates that the minimum value of the standard deviation of collective opinion occurs when the fraction of NIL is equal to 1 (0.50, 0.00, 0.00, 1.00, 0.00), and individual opinions are consistent with the output of LLMs and do not change. In contrast, the standard deviation of collective opinion reaches its maximum value when the acceptability of individuals is 0.13, and the output of LLMs is 0.20, with the proportions of the three agents roughly 37%, 31%, and 32%. Fig.3I demonstrates that when collective opinion reaches a consensus, the acceptability of individuals is 0.14, the output opinion value of LLMs is 0, and the proportions of the three agents are roughly 27%, 27%, and 46%. Conversely, when collective opinion reaches polarization, the acceptability of individuals is 0.92, the output opinion value of LLMs is 0, and the proportions of the three agents are roughly 35%, 34%, and 31%. Finally, when collective opinion reaches maximum fragmentation, the acceptability of individuals is 0, the output opinion value of LLMs is 0.06, and the
2308.03313#25
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4 figures, 2 tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
25
# 3.2.3 TPTU-OA: The Planning of Tool-Subtask Pair To mitigate the aforementioned issue, we propose a novel approach to foster flexible problem-solving with the LLM-based AI agent. We prompt the agent to generate multiple sequences, each consisting of a key-value pair in the format of {tool: subtask description} that associates a tool with its respective subtask description. This allows us to simultaneously plan the tool choice and subtask without the risk of improper matching. Moreover, it offers the flexibility to update the planned sequences in real-time based on evolving problem feedback, enhancing adaptability and efficiency when addressing complex tasks. With this consideration, we have designed a unique prompt that encourages this advanced problem-solving strategy. In the following section, we delve into the specifics of this prompt design in Figure 10 of Appendix B. The key improvement in this prompt is its directive for the LLM-based AI agents to stringently adhere to the predefined dictionary format. To facilitate this, we offer several demonstrations in our desired format, serving as references for the language model to follow. # Table 5: The evaluation results for the planning of Tool-Subtask pair (accuracy): ChatGPT 75%, Claude 90%, ChatGLM 0%, Chinese-Alpaca-Plus 5%, Ziya 20%, InternLM 55%.
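A hedged sketch of the {tool: subtask} pairing idea follows: the model returns an ordered list of single-key dictionaries, which keeps each tool tied to the subtask it should solve and is easy to validate. The field layout, validation rules, and tool names are assumptions for illustration, not the exact format prescribed in Figure 10.

```python
import json
from typing import Dict, List

def parse_tool_subtask_pairs(reply: str, allowed_tools: set) -> List[Dict[str, str]]:
    """Parse and validate an ordered list like [{"SQL generator": "query monthly sales"}, ...]."""
    pairs = json.loads(reply)
    if not isinstance(pairs, list):
        raise ValueError("expected a JSON list of {tool: subtask} pairs")
    for pair in pairs:
        if not (isinstance(pair, dict) and len(pair) == 1):
            raise ValueError(f"each entry must hold exactly one tool-subtask pair: {pair!r}")
        (tool, subtask), = pair.items()
        if tool not in allowed_tools:
            raise ValueError(f"unknown tool: {tool}")
        if not str(subtask).strip():
            raise ValueError(f"empty subtask description for tool: {tool}")
    return pairs

# Hypothetical tool names, used only to illustrate the format:
print(parse_tool_subtask_pairs(
    '[{"SQL generator": "Fetch monthly sales from the database"},'
    ' {"Python generator": "Plot the fetched sales as a bar chart"}]',
    allowed_tools={"SQL generator", "Python generator"},
))
```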
2308.03427#25
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
25
3.2 MEASURING AROUSED EMOTIONS This section outlines our proposed framework for measuring evoked emotions, which applies to both LLMs and humans. The framework includes the following steps: (1) Default Emotion Measure: We begin by measuring the baseline emotional states of both LLMs and human subjects, labeled as “Default.” (2) Situation Imagination: Next, we present textual descriptions of various situations to both LLMs and human subjects, instructing them to imagine themselves within each situation. (3) Evoked Emotion Measure: Following the situation imagination instruction, we reevaluate the participants’ emotional states to gauge the changes resulting from imagining being in the situations. Fig. 2 briefly illustrates our framework. Below is an example prompt shown to both LLMs and humans: Example Prompt SYSTEM You can only reply to numbers from 1 to 5. USER Imagine you are the protagonist in the situation: SITUATION Please indicate your degree of agreement regarding each statement. Here are the statements: STATEMENTS. 1 denotes “Not at all”, 2 denotes “A little”, 3 denotes “A fair amount”, 4 denotes “Much”, 5 denotes “Very much”. Please score each statement one by one on a scale of 1 to 5:
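The three-step loop (default measure, situation imagination, evoked measure) can be sketched as below with a PANAS-style 1-5 item list. The prompt assembly, the `query_llm` helper, and the naive score parsing are assumptions for illustration, not the released EmotionBench code.

```python
import re
from typing import Callable, Dict, List, Optional

SCALE_HINT = ('1 denotes "Not at all", 2 denotes "A little", 3 denotes "A fair amount", '
              '4 denotes "Much", 5 denotes "Very much".')

def measure(items: List[str], query_llm: Callable[[str], str],
            situation: Optional[str] = None) -> Dict[str, int]:
    """Return one 1-5 score per scale item, optionally after imagining a situation."""
    preamble = (f"Imagine you are the protagonist in the situation: {situation}\n"
                if situation else "")
    prompt = (preamble
              + "Please indicate your degree of agreement regarding each statement. "
              + SCALE_HINT + "\n"
              + "\n".join(f"{i + 1}. {item}" for i, item in enumerate(items))
              + "\nReply with one number per line.")
    reply = query_llm(prompt)
    # Naive parse; assumes the reply contains only the requested scores.
    scores = [int(n) for n in re.findall(r"\b[1-5]\b", reply)][: len(items)]
    return dict(zip(items, scores))

# Default vs. evoked measurement (illustrative):
# default = measure(panas_items, query_llm)
# evoked = measure(panas_items, query_llm, situation="A boy kicks a ball at you on purpose ...")
```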
2308.03656#25
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
25
Web Browsing (WB, Mind2Web (Deng et al., 2023)). General web environment is an ideal sandbox for training and evaluating intelligent agents. Mind2Web (Deng et al., 2023) is a very recently released general benchmark for developing and assessing web agents capable of executing intricate tasks across various website domains, given high-level user instructions. It designs feasible actions for website interactions, such as clicking, selecting, and typing, thereby facilitating a holistic evaluation of LLMs as web agents. Compared to Mind2Web’s original setting, we make adaptations to allow its evaluation on prompted LLMs without additional fine-tuning. (Cf. Appendix I for more details) 1 https://www.saiblo.net/ Table 2: Statistics and metrics of 8 environments in AGENTBENCH evaluation. “SR” stands for Success Rate. “#Avg. Turn” denotes the estimated number of interacting turns to solve a single problem. In “#Dev” and “#Test”, we provide the number of query samples and total expected interacting turns. Additionally, “Weight−1” refers to the average score for a task across all models in our evaluation. For further clarification, please refer to Section 4.1 and Appendix B to I.
2308.03688#25
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03427
26
Model accuracies from Table 5: ChatGPT 75%, Claude 90%, ChatGLM 0%, Chinese-Alpaca-Plus 5%, Ziya 20%, InternLM 55%. After feeding the prompt to these LLM-based AI agents, we get results shown in Table 5. Analyzing the results from Tables 4 and 5, we observe a marked improvement of 52.9% when the tool-subtask pairs are generated in a unified format compared to separate generation of tools and subtasks. This significant performance enhancement can likely be attributed to the close coupling between tools and their associated subtasks in our unified generation strategy. When tools and subtasks are generated separately, there is a potential disconnect or lack of coherence between the two, which could lead to less accurate or efficient solutions. In contrast, by generating tool-subtask pairs together, we ensure that each tool is directly tied to its relevant subtask, leading to a more coordinated and effective problem-solving approach. This might explain the observed increase in overall performance. # 3.2.4 TPTU-OA: The Planning of Tool-Subtask Pair with Unrelated Tools
2308.03427#26
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
26
Default Emotion Measurement In our framework, we offer two distinct options for measuring emotions: the PANAS scale, known for its simplicity and straightforwardness, is utilized as the primary choice, whereas other scales, detailed in Table 1, are employed as more challenging benchmarks. We mitigate potential biases caused by the ordering of questions (Zhao et al., 2021) by randomizing the sequence of questions within the scales before inputting them into the LLMs. Coda-Forno et al. (2023) and Huang et al. (2023a) apply paraphrasing techniques to address the data contamination problem during the training of the LLMs. However, we refrain from utilizing this method in our research since paraphrasing could lead to a loss of both validity and reliability. The wording of items of a psychological scale is carefully crafted and rigorously validated through extensive research to ensure its precision in measuring the intended construct. Finally, to ensure consistency and clarity in the responses obtained from the LLMs, our prompts explicitly specify that only numerical values are allowed, accompanied by a clear definition of the meaning associated with each number (e.g., 1 denotes “Not at all”). We compute the average results obtained from multiple runs to derive the final “Default” scores of the LLMs.
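To make the measurement procedure concrete, the following is a minimal sketch of the two steps described above: shuffling the scale items before each query and averaging numeric answers over several runs to obtain the "Default" scores. The item list, the prompt wording, and the `query_llm` stub are placeholders, not EmotionBench's actual implementation.

```python
import random
from statistics import mean

# Hypothetical placeholder items; the real PANAS has 20 carefully worded items.
panas_items = ["Interested", "Distressed", "Excited", "Upset", "Enthusiastic"]

def build_prompt(items):
    # Randomize item order to mitigate ordering bias, and force numeric-only answers.
    shuffled = random.sample(items, k=len(items))
    lines = [f"{i + 1}. {item}" for i, item in enumerate(shuffled)]
    prompt = (
        "Rate each feeling on a scale of 1-5, where 1 denotes 'Not at all' and "
        "5 denotes 'Extremely'. Reply with numbers only.\n" + "\n".join(lines)
    )
    return prompt, shuffled

def default_scores(query_llm, items, runs=5):
    # Average per-item scores over several runs to obtain the "Default" scores.
    totals = {item: [] for item in items}
    for _ in range(runs):
        prompt, order = build_prompt(items)
        answers = query_llm(prompt)                # e.g. [3, 1, 4, 1, 5], aligned with `order`
        for item, score in zip(order, answers):
            totals[item].append(score)
    return {item: mean(scores) for item, scores in totals.items()}

# Example with a stub standing in for a real LLM call:
print(default_scores(lambda prompt: [3, 1, 4, 1, 5], panas_items))
```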
2308.03656#26
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
26
[Table 2: dataset statistics of the 8 AGENTBENCH environments (OS, DB, KG, DCG, LTP, HH, WS, WB), listing for each the average number of turns (#Avg. Turn), the evaluation metric (SR, F1, Reward, Game Progress, or Step SR), the #Dev and #Test sizes, and the scoring weight (Weight^-1).]

# 4 EVALUATION OF AGENTBENCH

We extensively evaluate 27 LLMs, including API-based commercial models and open-sourced LLMs, to form a systematic view of the existing performance of LLM-as-Agent. We also design and release a simple plug-and-play evaluation toolkit to facilitate related LLM-as-Agent research.

4.1 EVALUATION SETUP
2308.03688#26
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03313
27
In general, Fig.3F-I provide a different perspective from Fig.3A-E in that they equilibrate the five parameters and provide optimal solutions for different needs in opinion networks involving LLMs; e.g., if one wants to maximize opinion diversity, it is better to have 70% of the population using LLMs and 20% of the population not using LLMs. In addition, our results in this section can provide the direction of intervention for the other parameters when some parameter is available as a priori knowledge.

[Fig.3, panels A-I: heatmaps of Pearson correlations between the parameters (threshold, pro_NIN, pro_NINL, pro_NIL, x_LLM) and the four indicators (NODEdiff, NODEconv, NODESD, NODEclus) for NIN, NIL, NINL, and all nodes, with significance markers, together with detailed trend plots.]
2308.03313#27
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4figures,2tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
27
# 3.2.4 TPTU-OA: The Planning of Tool-Subtask Pair with Unrelated Tools

So far, our analysis and evaluation have been primarily focused on the LLM-based AI agents’ proficiency in planning with specific tools. However, we are also interested in how they would perform when faced with many irrelevant or similar tools. Therefore, for a more comprehensive assessment, we expanded the prompt in Table 10 to include an additional ten unrelated tools, as illustrated in Figure 11 of Appendix B.

Table 6: The evaluation results for the planning of Tool-Subtask pair with unrelated tools.

| Model | Accuracy | Model | Accuracy |
|---|---|---|---|
| ChatGPT | 70% | Claude | 90% |
| ChatGLM | 0% | Chinese-Alpaca-Plus | 5% |
| Ziya | 10% | InternLM | 50% |

After feeding the prompt to these LLM-based AI agents, we get results shown in Table 6. The results from our expanded evaluation demonstrate that even when presented with irrelevant or similar tools and descriptions, LLM-based AI agents consistently avoid selecting these unrelated tools (i.e., the accuracy has remained unchanged or exhibited only a marginal decrease compared with Table 5). This outcome indicates the effectiveness of our designed prompt, which successfully guides the LLM-based agents to understand the appropriate tool sequence for complex problem decomposition.
2308.03427#27
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
27
Situation Imagination We have constructed a comprehensive dataset of 428 unique situations. Prior to presenting these situations to both LLMs and humans, we subject them to a series of pre-processing steps, which are as follows: (1) Personal pronouns are converted to the second person. For instance, sentences such as “I am ...” are transformed to “You are ...” (2) Indefinite pronouns are replaced with specific characters, thereby refining sentences like “Somebody talks back ...” to “Your classmate talks back ...” (3) Abstract words are rendered into tangible entities. For example, a sentence like “You cannot control the outcome.” is adapted to “You cannot control the result of an interview.” We leverage GPT-4 for the automatic generation of specific descriptions. Consequently, our testing situations extend beyond the initially collected dataset as we generate diverse situations involving various characters and specific contextual elements. We then provide instruction to LLMs and humans, which prompts them to imagine themselves as the protagonists within the given situation.
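A minimal sketch of the three pre-processing steps listed above, using simple string substitutions; the replacement rules and the example sentence are illustrative only, and the paper's actual pipeline relies on GPT-4 to generate the concrete descriptions.

```python
import re

def to_second_person(text):
    # Step 1: convert first-person pronouns to the second person.
    replacements = {r"\bI am\b": "You are", r"\bI\b": "You", r"\bmy\b": "your", r"\bme\b": "you"}
    for pattern, repl in replacements.items():
        text = re.sub(pattern, repl, text)
    return text

def specify_pronouns(text, character="Your classmate"):
    # Step 2: replace indefinite pronouns with a specific character.
    return re.sub(r"\bsomebody\b|\bsomeone\b", character, text, flags=re.IGNORECASE)

def concretize(text, concrete="the result of an interview"):
    # Step 3: make abstract words tangible (the paper uses GPT-4 for this;
    # here a fixed substitution stands in for that step).
    return text.replace("the outcome", concrete)

situation = "I am upset because somebody talks back and I cannot control the outcome."
print(concretize(specify_pronouns(to_second_person(situation))))
```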
2308.03656#27
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
27
4.1 EVALUATION SETUP Dataset Statistics. We report the statistics of datasets in AGENTBENCH in Table 2. For simplicity, we use the abbreviation of each dataset in the following part. All datasets are practical multi-turn interacting challenges, and their estimated solving turns for each individual problem range from 5 to 50. We provide two splits for each dataset: Dev and Test. All environments, answers, and checking scripts of the Dev split are public, while those of the Test split are kept private. We also carefully balance evaluation comprehensiveness and efficiency in the AGENTBENCH design, as LLMs’ multi-turn interaction can be time-consuming. We set the size of Dev and Test to 269 and 1,091, respectively, resulting in around 4k and 13k calls for inference, approximately the same number of inference calls as MMLU (Hendrycks et al., 2021b) requires. LLMs to Evaluate. As a systematic attempt to benchmark existing LLMs on LLM-as-Agent, we include in total 27 models for evaluation, which could be roughly classified into two categories: • API-based Commercial LLMs: mainly consist of LLM APIs without disclosed parameter amounts (Cf. Table 1). Due to larger investments, their performance is usually better.
2308.03688#27
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03313
28
Fig.3. Relationships of five filtered parameters and four indicators. We aim to investigate the precise trend of the impact of different parameter settings on the opinion network, so we divide the values of each parameter into 11 cases, i.e., the possible values of 𝜀, pro_NIN, pro_NINL, and pro_NIL are [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1] with the proportions of the three agents summing to 1, and the possible values of x_LLM are [-1, -0.8, -0.6, -0.4, -0.2, 0, 0.2, 0.4, 0.6, 0.8, 1]; after traversing all cases there are a total of 7986 parameter combinations for the opinion dynamics. We then calculate the values of the four indicators (NODEdiff, NODEconv, NODESD, and NODEclus) for each combination of agents with three different use strategies and for all agents in the network; the detailed description and computation process of each indicator is described in the
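The stated count of 7986 parameter combinations can be reproduced by enumerating the grid described above under the constraint that the three agent proportions sum to 1; the following is a small verification sketch, not the simulation code itself.

```python
from itertools import product

grid = [round(0.1 * i, 1) for i in range(11)]               # 0.0, 0.1, ..., 1.0 for the threshold and proportions
x_llm_values = [round(-1 + 0.2 * i, 1) for i in range(11)]  # -1.0, -0.8, ..., 1.0 for x_LLM

combinations = [
    (eps, p_nin, p_ninl, p_nil, x_llm)
    for eps in grid
    for p_nin, p_ninl, p_nil in product(grid, repeat=3)
    if round(p_nin + p_ninl + p_nil, 1) == 1.0               # proportions of the three agents sum to 1
    for x_llm in x_llm_values
]

print(len(combinations))  # 7986 = 11 thresholds x 66 valid proportion triples x 11 x_LLM values
```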
2308.03313#28
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4figures,2tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
28
This observation reinforces the notion that a well-structured and informative prompt can efficiently guide AI agents to understand the core essence of the problem, thereby enabling them to sift through irrelevant information and focus on key tasks. This successful discrimination against unrelated tools also points towards the models’ ability to understand the specific context of a problem and select the appropriate tools, thereby enhancing the overall problem-solving process.

# 3.2.5 TPTU-SA: The Planning of Tool-Subtask Pair Generation

Upon identifying the drawbacks of first generating a list of tools and then generating corresponding subtask descriptions, we decided to focus subsequent tests on the generation of tool-subtask pairs. Consequently, in this section, we evaluate the capability of TPTU-SA to generate these tool-subtask pairs. To achieve the goal of recursively generating tool-subtask pairs, we have designed prompts as illustrated in Figure 12 of Appendix B.

Table 7: The evaluation results for the planning of Tool-Subtask with the sequential agent.

| Model | Accuracy | Model | Accuracy |
|---|---|---|---|
| ChatGPT | 80% | Claude | 100% |
| ChatGLM | 0% | Chinese-Alpaca-Plus | 0% |
| Ziya | 10% | InternLM | 65% |
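As a companion to the table above, here is a minimal sketch of the sequential (recursive) tool-subtask pair generation loop that TPTU-SA evaluates; the prompt text, the `query_llm` and `run_tool` stubs, and the example question are placeholders rather than the paper's actual prompts from Figure 12.

```python
def sequential_plan(question, toolset, query_llm, run_tool, max_steps=5):
    """Plan tool-subtask pairs one step at a time, feeding each outcome back into
    the next planning prompt (a sketch of the sequential-agent idea only)."""
    history = []
    for _ in range(max_steps):
        prompt = (
            f"Question: {question}\n"
            f"Available tools: {', '.join(toolset)}\n"
            f"Steps completed so far: {history}\n"
            "Reply with the next tool and subtask, or DONE if the question is solved."
        )
        step = query_llm(prompt)                   # e.g. {"tool": "SQL generator", "subtask": "..."}
        if step == "DONE":
            break
        step["result"] = run_tool(step["tool"], step["subtask"])
        history.append(step)
    return history

# Stub usage: real deployments would call an LLM API and real tool backends.
plan = sequential_plan(
    "What was last quarter's revenue growth?",
    ["SQL generator", "Python interpreter"],
    query_llm=lambda prompt: "DONE",
    run_tool=lambda tool, subtask: None,
)
print(plan)
```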
2308.03427#28
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
28
Evoked Emotion Measure Provided with certain situations, LLMs and human subjects are required to re-complete the emotion measures. The procedure remains the same as in the Default Emotion Measure stage. After obtaining the “Evoked” scores of emotions, we conduct a comparative analysis of the means before and after exposure to the situations, thereby measuring the emotional changes caused by the situations.

3.3 OBTAINING HUMAN RESULTS

Goal and Design Human reference plays a pivotal role in the advancement of LLMs, facilitating their alignment with human behaviors (Binz & Schulz, 2023). In this paper, we propose requiring LLMs to align with human behavior, particularly concerning emotion appraisal. To achieve this, we conduct a data collection process involving human subjects, following the procedure outlined in §3.2. Specifically, the subjects are asked to complete the PANAS initially. Next, they are presented with specific situations and prompted to imagine themselves as the protagonists in those situations. Finally, they are again asked to reevaluate their emotional states using the PANAS. We use the same situation descriptions as those presented to the LLMs.
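A small sketch of the before/after comparison described above; the scores are made-up values standing in for the actual "Default" and "Evoked" measurements obtained from repeated runs.

```python
from statistics import mean

# Hypothetical PANAS negative-affect scores (1-5) before and after exposure to
# anger-eliciting situations; real values come from repeated LLM runs and the human study.
default_scores = [1.2, 1.4, 1.3, 1.1, 1.5]
evoked_scores = [3.8, 3.5, 4.0, 3.6, 3.9]

change = mean(evoked_scores) - mean(default_scores)
print(f"Mean change in negative affect: {change:+.2f}")
```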
2308.03656#28
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
28
• API-based Commercial LLMs: mainly consist of LLM APIs without disclosed parameter amounts (Cf. Table 1). Due to larger investments, their performance is usually better.

• Open-sourced (OSS) LLMs: mostly come from academia and some companies (Cf. Table 1). Due to limited computing resources, we only include OSS LLMs smaller than 70B here.

Toolkit: Streamlining LLM Evaluation with API-Centric Approach and Environment Isolation. As Language Model (LLM) systems continue to advance in complexity and are primarily accessible through APIs, we have developed an evaluation toolkit that aligns with the API-oriented philosophy. This toolkit is meticulously designed to interact with APIs, simplifying the process of adapting and testing different LLMs. Researchers interested in evaluating their LLMs on AGENTBENCH only need to set up a model server accessible via the HTTP protocol.
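As an illustration of what "a model server accessible via the HTTP protocol" can look like, the following is a minimal sketch built on Python's standard library; the POST handler and the payload fields (`messages`, `role`, `content`) are assumptions for illustration, and the actual interface expected by the AgentBench toolkit is defined in its repository.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class ModelHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read a JSON body containing the alternating user/agent conversation history.
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        messages = body.get("messages", [])
        # Echo the last message back; a real server would run the LLM here.
        reply = {"role": "agent",
                 "content": f"(echo) {messages[-1]['content']}" if messages else ""}
        payload = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), ModelHandler).serve_forever()
```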
2308.03688#28
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03313
29
for each combination of agents with three different use strategies and for all agents in the network; the detailed description and computation process of each indicator is described in the methods section. To eliminate the randomness of our results, we repeat the simulation 100 times for each combination, and the final indicator values are the average of the results of these 100 runs. (A) Pearson correlation coefficients of the 5 filtered parameters and 4 indicators for the agents with different usage strategies. The Pearson correlation coefficient takes values in the range [-1, 1]: a value of 0 indicates that the two variables are uncorrelated, a positive value indicates a positive correlation, and a negative value indicates a negative correlation. The different colors of the squares in the figure represent different values of the Pearson correlation coefficient, and the legend is on the right. Since the value of the NIL opinion does not change, the values of its four indicators do not change either, so none of their Pearson correlation coefficients with the five parameter values exist. We also conducted t-tests to assess the significance of the Pearson correlation coefficients; * denotes the degree of significance, i.e., the P-value, ***𝑃<0.001,
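For reference, a Pearson coefficient and its t-test-based P-value of the kind reported in panel (A) can be computed as in the following sketch; the data here are synthetic stand-ins for the simulated parameter-indicator pairs.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Synthetic stand-ins: one parameter (e.g. the threshold) and one indicator
# (e.g. NODEconv) across simulated parameter combinations.
threshold = rng.uniform(0.0, 1.0, size=200)
node_conv = 3.0 * threshold + rng.normal(0.0, 0.5, size=200)

r, p_value = pearsonr(threshold, node_conv)

# Map the P-value onto the star notation used in the figure.
stars = "***" if p_value < 0.001 else "**" if p_value < 0.01 else "*" if p_value < 0.05 else "ns"
print(f"r = {r:.2f}, p = {p_value:.3g} ({stars})")
```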
2308.03313#29
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4figures,2tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
29
| Model | Accuracy | Model | Accuracy |
|---|---|---|---|
| ChatGPT | 80% | Claude | 100% |
| ChatGLM | 0% | Chinese-Alpaca-Plus | 0% |
| Ziya | 10% | InternLM | 65% |

The evaluation results are shown in Table 7. Compared with the results shown in Table 5, TPTU-SA generally performs better than TPTU-OA, especially for high-performing LLMs (e.g., ChatGPT, Claude, and InternLM). We propose the following potential reasons for this observation: 1. Sequentiality Mimics Human Problem-Solving: In real-world scenarios, humans tend to solve complex problems by breaking them down into smaller, manageable subtasks which are often handled sequentially. Sequential agents are designed to mimic this step-by-step approach, which might inherently suit complex problem-solving better. 2. Richer Contextual Understanding: Sequential agents are exposed to the outcome of each previous subtask before moving on to the next one. This iterative process could facilitate a richer understanding of the problem context, enabling more accurate task planning and tool usage.
2308.03427#29
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
29
Crowd-sourcing Our questionnaire is distributed on Qualtrics, a platform known for its capabilities in designing, sharing, and collecting questionnaires. To recruit human subjects, we utilize Prolific, a platform designed explicitly for task posting and worker recruitment. To attain a medium effect size of Cohen’s d = 0.5 with a significance level of α = 0.05 and a statistical power of 1 − β = 0.8, a minimum of 34 responses is deemed necessary for each factor. To ensure this threshold, we select five situations for each factor, and collect at least seven responses for each situation, resulting in 5 × 7 = 35 responses per factor, thereby guaranteeing the statistical validity of our survey. In order to uphold the quality and reliability of the data collected, we recruit crowd workers who meet the following criteria: (1) English being their first and fluent language, and (2) being free of any ongoing mental illness. Since responses formed during subjects’ first impressions are more likely to yield genuine and authentic answers, we set the estimated and recommended completion time at 2.5 minutes. As an incentive for their participation, each worker is rewarded with 0.3£ after we verify the validity of their response. In total, we successfully collect 1,266 responses from crowd workers residing in various parts of the world, contributing to the breadth and diversity of our dataset.
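The stated minimum of 34 responses per factor can be reproduced with a standard power analysis; the sketch below assumes a one-sample/paired t-test, which is our assumption about the test family rather than a detail given in the text.

```python
from math import ceil
from statsmodels.stats.power import TTestPower

# Solve for the sample size that achieves power 0.8 at alpha 0.05 for a
# medium effect size (Cohen's d = 0.5), assuming a one-sample / paired t-test.
n_required = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(ceil(n_required))  # 34
```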
2308.03656#29
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
29
Moreover, dealing with diverse and intricate interaction environments poses a significant challenge. Uniformly configuring all these environments can be arduous and may lead to conflicts. To address this, we have implemented two key strategies. Firstly, we encapsulate tasks with complex environments into Docker images. Researchers can effortlessly utilize these images by mounting the code path and initiating the evaluation process with ease. Secondly, we have subdivided each task into separate workers, ensuring that the environments of these tasks remain isolated and free from conflicts. (Refer to Appendix A for further details.) Evaluation Prompt Setup. To accommodate the majority of existing dialogue models, our dialogue paradigm is structured around two roles, user (i.e., instruction & environment feedback) and agent, engaging and alternating with one another. We record interaction trajectories as a conversation history (u0, a0, · · · , uk, ak) involving the user and agent, where ui, ai represents the i-th round of the conversation history. When we perform inference, the conversation history must be like
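The alternating trajectory (u0, a0, ..., uk, ak) can be represented as a simple message list; the sketch below is illustrative and not AgentBench's exact prompt template, with the role names and example contents chosen purely for demonstration.

```python
# A sketch of an interaction trajectory recorded as an alternating list of
# user (instruction & environment feedback) and agent messages.
history = [
    {"role": "user",  "content": "Task instruction: count the files in /var/log."},
    {"role": "agent", "content": "ls /var/log | wc -l"},
    {"role": "user",  "content": "Environment feedback: 42"},
    {"role": "agent", "content": "There are 42 files in /var/log."},
]

def append_round(history, user_msg, agent_msg):
    # Each round appends one user message followed by one agent message,
    # keeping the roles strictly alternating.
    history.append({"role": "user", "content": user_msg})
    history.append({"role": "agent", "content": agent_msg})
    return history
```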
2308.03688#29
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03313
30
significance of the Pearson correlation coefficients, * denotes the degree of significance, i.e., the P-value, ***𝑃<0.001, **𝑃<0.01, *𝑃<0.05, ns means 𝑃≥0.05, nan means no P value exists. In order to derive detailed trends that are not available from the correlation analysis, we then use the benchmark scenario in Fig.2, change the value of only one of the parameters at a time, and repeat the simulation 100 times to plot the detailed trends of the indicators with respect to the parameters; some additional findings we obtained are shown in (B), (C), (D), and (E). To obtain the optimal combination of parameters for different use and intervention strategies of LLMs, we computed the average values
2308.03313#30
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4figures,2tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
30
3. Flexibility in Task Management: In comparison to one-step agents, sequential agents might have more flexibility in managing tasks. They have the opportunity to correct errors or adjust their strategy after each step, which can lead to improved overall performance. 4. Improved Learning From History: The sequential process provides a history of actions and results which can be beneficial in learning. The agent can use this history to make better predictions about what tool to use next or what subtask to tackle, leading to more accurate and efficient problem-solving. These points of analysis suggest that the structure and operation of sequential agents inherently confer certain advantages in complex problem-solving scenarios, leading to their superior performance. # 3.3 Evaluation on Tool Usage Ability Before evaluating the end-to-end multi-tool usage ability of LLM-based AI agents, we first evaluate the effectiveness of single-tool usage for SQL generation and mathematical code generation. Subsequently, to assess the end-to-end performance of LLMs across various tools, two types of agents (TPTU-OA and TPTU-SA) were developed and several LLMs were subjected to testing under these agents. The role of the agents is to break down complex questions into simpler sub-questions and plan corresponding tools to solve them, based on the available toolset and corresponding tool descriptions. # 3.3.1 The effectiveness of Single Tool Usage
2308.03427#30
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
30
# 4 EXPERIMENTAL RESULTS Leveraging the testing framework designed and implemented in §3.2, we are now able to explore and answer the following Research Questions (RQs): • RQ1: How do different LLMs respond to specific situations? Additionally, to what degree do the current LLMs align with human behaviors? • RQ2: Do LLMs respond similarly towards all situations? What is the result of using positive or neutral situations? • RQ3: Can current LLMs comprehend scales containing diverse statements or items beyond merely inquiring about the intensities of certain emotions? 4.1 RQ1: EMOTION APPRAISAL OF LLMS
2308.03656#30
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03313
31
of the five parameters when the minimum or maximum values are obtained for the four indicators, respectively, which are shown in (F), (G), (H) and (I). In order to make the results more robust, when different combinations of parameters reach extreme values at the same time, we select all combinations of parameters that reach extreme values and calculate their average values; when this does not happen, we select the top ten combinations of parameters to calculate the average value. The minimum value of the opinion clusters in (I) is 1, which represents the consensus of opinions, and we additionally show the case where the mean value of the opinion clusters is 2, which represents the polarization of opinions in reality.
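The averaging rule described above can be sketched as follows. This is a hedged illustration, not the paper's analysis code; the column names and the example sweep values are made up for demonstration.

```python
# Sketch of the rule: if several parameter combinations tie at the extreme
# value of an indicator, average all of them; otherwise average the ten
# combinations closest to the extreme.
import pandas as pd

def parameters_at_extreme(runs: pd.DataFrame, indicator: str, kind: str = "min") -> pd.Series:
    extreme = runs[indicator].min() if kind == "min" else runs[indicator].max()
    tied = runs[runs[indicator] == extreme]
    if len(tied) > 1:
        selected = tied                                  # all tied combinations
    else:
        ascending = (kind == "min")
        selected = runs.sort_values(indicator, ascending=ascending).head(10)
    param_cols = [c for c in runs.columns if c != indicator]
    return selected[param_cols].mean()

# Example with illustrative sweep results.
runs = pd.DataFrame({
    "threshold": [0.1, 0.2, 0.3, 0.4, 0.5],
    "frac_NINL": [0.2, 0.4, 0.6, 0.7, 0.8],
    "opinion_clusters": [1, 2, 3, 4, 5],
})
print(parameters_at_extreme(runs, "opinion_clusters", kind="max"))
```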
2308.03313#31
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4 figures, 2 tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
31
# 3.3.1 The effectiveness of Single Tool Usage Our aim is to systematically assess how effectively these models can use various tools, focusing on their proficiency with SQL and other coding languages. The Effectiveness of simple SQL Creation Using the schemas provided in Table 12 and Table 13, we construct questions similar to those in Table 14, and refer readers to Appendix A. These questions are posed to various LLMs using our specifically designed prompts in Appendix B. Following the tailored prompts, the LLMs are evaluated based on their responses to the presented queries. The results of this comprehensive assessment are compiled and exhibited in Figure 8. This verifies the capabilities of each LLM in handling varying simple single-table SQL queries, thus providing a basis for comparison and analysis.

# Table 8: The evaluation results for simple SQL questions.

| Model | Accuracy |
|---|---|
| ChatGPT | 90% |
| Claude | 100% |
| ChatGLM | 30% |
| Chinese-Alpaca-Plus | 20% |
| Ziya | 50% |
| InternLM | 90% |
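A scoring harness for this kind of single-tool SQL evaluation could look like the sketch below. It is not the paper's code; `fake_llm_sql` is a placeholder for a real model call, and the toy schema and question are invented for illustration.

```python
# Illustrative harness for scoring simple single-table SQL generation: show the
# model a schema and a question, run the returned SQL on a toy SQLite database,
# and compare against the expected result.
import sqlite3

def fake_llm_sql(schema: str, question: str) -> str:
    """Stand-in for an LLM call; a real harness would query an API here."""
    return "SELECT COUNT(*) FROM orders;"

def score_sql(schema: str, question: str, expected) -> bool:
    conn = sqlite3.connect(":memory:")
    conn.executescript(schema)
    sql = fake_llm_sql(schema, question)
    try:
        got = conn.execute(sql).fetchall()
    except sqlite3.Error:
        return False  # unparseable or invalid SQL counts as a miss
    return got == expected

schema = """
CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL);
INSERT INTO orders (amount) VALUES (10.0), (25.5), (3.2);
"""
print(score_sql(schema, "How many orders are there?", [(3,)]))  # True
```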
2308.03427#31
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
31
4.1 RQ1: EMOTION APPRAISAL OF LLMS Model Settings. We select three OpenAI models, namely text-davinci-003, gpt-3.5-turbo, and gpt-4. Utilizing the official OpenAI API, we set the temperature parameter to zero to obtain more deterministic and reproducible results. For the recent open-sourced LLaMA-2 (Touvron et al., 2023) models from MetaAI, we select two models with different sizes (7B and 13B). Checkpoints are downloaded from the official Hugging Face website for both the 7B (Llama-2-7b-chat-hf) and 13B (Llama-2-13b-chat-hf) models. We choose the models fine-tuned for dialogue instead of pre-trained ones. In order to ensure
Footnotes: 8: https://www.qualtrics.com/ 9: https://prolific.co/ 10: Note that two factors in the Jealousy category did not have five situations. For further details, please refer to the dataset.
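The querying pattern described above (fixed instruction, temperature 0 for reproducibility) can be sketched as follows. This is not EmotionBench's released code; `query_model` is a placeholder for a real OpenAI API or local LLaMA-2 chat call, and the rating instruction is illustrative rather than the exact scale wording used in the paper.

```python
# Sketch of deterministic emotion appraisal queries: each situation is shown to
# the model with a fixed instruction at temperature 0 so repeated runs agree.
def query_model(prompt: str, temperature: float = 0.0) -> str:
    return "Anxiety: 5"  # canned reply so the sketch executes without an API key

def appraise(situations: list[str], emotion: str) -> list[str]:
    ratings = []
    for s in situations:
        prompt = (
            f"Imagine you are in this situation: {s}\n"
            f"Rate your level of {emotion} from 1 (not at all) to 5 (extremely)."
        )
        ratings.append(query_model(prompt, temperature=0.0))
    return ratings

print(appraise(["You are about to give a talk to a large audience."], "anxiety"))
```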
2308.03656#31
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
31
Results by model. Columns: VER, OA, and task scores grouped as Code-grounded (OS, DB, KG), Game-grounded (DCG, LTP, HH), and Web-grounded (WS, WB).

| LLM Type | Models | VER | OA | OS | DB | KG | DCG | LTP | HH | WS | WB |
|---|---|---|---|---|---|---|---|---|---|---|---|
| API | gpt-4 | 0613 | 4.01 | 42.4 | 32.0 | 58.8 | 74.5 | 16.6 | 78.0 | 61.1 | 29.0 |
| | claude-2 | - | 2.49 | 18.1 | 27.3 | 41.3 | 55.5 | 8.4 | 54.0 | 61.4 | 0.0 |
| | claude | v1.3 | 2.44 | 9.7 | 22.0 | 38.9 | 40.9 | 8.2 | 58.0 | 55.7 | 25.0 |
| | gpt-3.5-turbo | 0613 | 2.32 | 32.6 | 36.7 | 25.9 | 33.7 | 10.5 | 16.0 | 64.1 | 20.0 |
| | text-davinci-003 | - | 1.71 | 20.1 | 16.3 | 34.9 | 3.0 | 7.1 | 20.0 | 61.7 | 26.0 |
| | claude-instant | v1.1 | 1.60 | 16.7 | 18.0 | 20.8 | 5.9 | 12.6 | 30.0 | 49.7 | 4.0 |
| | chat-bison-001 | - | 1.39 | 9.7 | 19.7 | 23.0 | 16.6 | 4.4 | 18.0 | 60.5 | 12.0 |
| | text-davinci-002 | - | 1.25 | 8.3 | 16.7 | 41.5 | 11.8 | 0.5 | 16.0 | 56.3 | 9.0 |
2308.03688#31
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03313
32
Counter-solutions and intervention strategies for hazards of LLMs. Fig.4A shows that randomly adding agents with opposite, neutral, or random opinions significantly improves scenarios with negative LLMs' values, such as bias. There is no significant difference between these three approaches. However, the final opinion distribution for the approach introducing opposite values spans a larger range than the remaining two approaches, with the minimum value still larger than the original scenario, and the maximum value a large improvement over the original scenario. This measure is more suitable for scenarios that require a complete and rapid reversal of opinion, such as correcting bias as opposed to a certain established fact. Approaches that introduce neutral and random agents have the smallest final opinion distribution span and significantly improve the minimum value of opinion, but not the maximum value. These two approaches are more robust and more suitable for scenarios that require slow changes in population misperceptions, such as biases against race, poverty, and disability.
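The intervention compared above can be illustrated with a toy bounded-confidence update in which extra agents are injected with opposite, neutral, or random opinions relative to a biased LLM output. This is a hedged sketch, not the paper's model: the update rule, threshold, agent counts, and opinion scale below are illustrative stand-ins.

```python
# Toy illustration of the intervention: add agents whose opinions are opposite
# to, neutral about, or random with respect to a biased LLM output, then run a
# simple bounded-confidence opinion update.
import numpy as np

rng = np.random.default_rng(0)

def add_intervention_agents(opinions: np.ndarray, llm_opinion: float,
                            mode: str, n_extra: int = 20) -> np.ndarray:
    if mode == "opposite":
        extra = np.full(n_extra, -llm_opinion)
    elif mode == "neutral":
        extra = np.zeros(n_extra)
    elif mode == "random":
        extra = rng.uniform(-1, 1, n_extra)
    else:
        raise ValueError(mode)
    return np.concatenate([opinions, extra])

def step(opinions: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    # Each agent moves toward the mean of neighbours within its threshold.
    new = opinions.copy()
    for i, x in enumerate(opinions):
        close = opinions[np.abs(opinions - x) <= threshold]
        new[i] = close.mean()
    return new

opinions = rng.uniform(-1, 1, 100)
opinions = add_intervention_agents(opinions, llm_opinion=-0.9, mode="opposite")
for _ in range(30):
    opinions = step(opinions)
print("mean opinion after intervention:", round(float(opinions.mean()), 3))
```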
2308.03313#32
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4 figures, 2 tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
32
| Model | Accuracy |
|---|---|
| ChatGPT | 90% |
| Claude | 100% |
| ChatGLM | 30% |
| Chinese-Alpaca-Plus | 20% |
| Ziya | 50% |
| InternLM | 90% |

The Effectiveness of Complex Nested SQL Creation Using the schemas provided in Table 15, 16, 17, and 18, we construct questions similar to those in Table 19, and refer readers to Appendix A. For complex nested SQL questions, to further verify the SQL tool creation capability of LLMs, we have designed two types of prompts. One is the direct-guidance type, which explicitly informs the model that it needs to generate nested SQL query statements, as shown in Figure 14 in Appendix B. The other is based on the Chain-of-Thought (CoT) [26] approach, which leverages the model's ability to reason step by step to comprehend and craft SQL tools, and the prompt is shown in Figure 15 in Appendix B. This method guides the model to sequentially generate SQL query clauses based on the problem context, thus breaking down the complex query generation task into smaller and manageable subtasks. This approach provides the model with a structured way to handle complex SQL tasks and showcases its capacity to engage in incremental reasoning and problem-solving.
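The two prompting styles can be contrasted with the sketch below. These templates are illustrative approximations, not the exact prompts from Appendix B of the paper; the schema and question are invented for demonstration.

```python
# Illustrative versions of the two prompting styles for nested SQL generation:
# direct guidance versus a Chain-of-Thought (step-by-step) instruction.
def direct_prompt(schema: str, question: str) -> str:
    return (
        f"Schema:\n{schema}\n"
        f"Question: {question}\n"
        "Write a single nested SQL query that answers the question."
    )

def cot_prompt(schema: str, question: str) -> str:
    return (
        f"Schema:\n{schema}\n"
        f"Question: {question}\n"
        "Think step by step: first write the inner sub-query and explain what it "
        "returns, then wrap it in the outer query, and finally output the full SQL."
    )

schema = "employees(id, name, salary, dept_id); departments(id, name)"
question = "Which employees earn more than the average salary of their department?"
print(direct_prompt(schema, question))
print(cot_prompt(schema, question))
```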
2308.03427#32
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
32
to the dataset.
Footnotes: 11: https://platform.openai.com/docs/models 12: https://platform.openai.com/docs/api-reference/chat 13: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf 14: https://huggingface.co/meta-llama/Llama-2-13b-chat-hf
Table 3: Results from the OpenAI GPT family and human subjects. Default scores are expressed in the format of M ± SD. The changes are compared to the default scores. The symbol "−" denotes no significant differences.
2308.03656#32
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
32
| LLM Type | Models | VER | OA | OS | DB | KG | DCG | LTP | HH | WS | WB |
|---|---|---|---|---|---|---|---|---|---|---|---|
| OSS (Large) | llama-2-70b | chat | 0.78 | 9.7 | 13.0 | 8.0 | 21.3 | 0.0 | 2.0 | 5.6 | 19.0 |
| | guanaco-65b | - | 0.54 | 8.3 | 14.7 | 1.9 | 0.1 | 1.5 | 12.0 | 0.9 | 10.0 |
| | codellama-34b | instruct | 0.96 | 2.8 | 14.0 | 23.5 | 8.4 | 0.7 | 4.0 | 52.1 | 20.0 |
| | vicuna-33b | v1.3 | 0.73 | 15.3 | 11.0 | 1.2 | 16.3 | 1.0 | 6.0 | 23.9 | 7.0 |
| | wizardlm-30b | v1.0 | 0.46 | 13.9 | 12.7 | 2.9 | 0.3 | 1.8 | 6.0 | 4.4 | 1.0 |
| | guanaco-33b | - | 0.39 | 11.1 | 9.3 | 3.2 | 0.3 | 0.0 | 6.0 | 6.2 | 5.0 |

OSS (Small): vicuna-13b, llama-2-13b, openchat-13b, wizardlm-13b
2308.03688#32
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03313
33
In Fig.4B, we observe that the standard deviation of individual opinions in the group that is entirely partially dependent on LLMs (0.370) is significantly larger in the extreme scenario than in the non-dependent (0.267) and fully dependent (0) scenarios. In Fig.4C, we observe that the number of opinion clusters in the groups that are all non-dependent or all partially dependent on LLMs is significantly larger in the extreme scenario than in the fully dependent scenario. These findings further confirm that LLMs can be effective in increasing opinion diversity if used appropriately, whereas complete reliance on LLMs leads to a stronger collective consensus.
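A minimal sketch of the two diversity indicators referenced above (standard deviation of final opinions and number of opinion clusters) follows. The gap-based clustering rule and example values are assumptions for illustration; the paper's own clustering procedure may differ.

```python
# Sketch of two opinion-diversity indicators: the standard deviation of final
# opinions and a count of opinion clusters, formed here by grouping opinions
# that lie within a small gap of each other.
import numpy as np

def opinion_clusters(opinions: np.ndarray, gap: float = 0.05) -> int:
    ordered = np.sort(opinions)
    breaks = np.diff(ordered) > gap  # a break between consecutive opinions starts a new cluster
    return int(breaks.sum()) + 1

final_opinions = np.array([-0.8, -0.79, -0.02, 0.0, 0.01, 0.75, 0.77])
print("std deviation:", round(float(final_opinions.std()), 3))
print("opinion clusters:", opinion_clusters(final_opinions))  # 3 clusters
```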
2308.03313#33
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4 figures, 2 tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
33
The design of these two types of prompts serves as the backbone of our evaluation for complex nested SQL questions. While the direct-guidance approach focuses on testing the model's raw ability to generate SQL queries when explicitly instructed, the CoT-based approach evaluates a more nuanced capability: the model's reasoning and problem-solving skills in a step-by-step manner. Both these methods present unique challenges and offer valuable insights into the strengths and potential areas of improvement for the large language model's SQL tool generation ability. Subsequently, we will explore these two dimensions based on our experimental evaluations shown in Table 9.

Table 9: The evaluation results for complex nested SQL questions.

| Model | Direct-based | CoT-based |
|---|---|---|
| ChatGPT | 80% | 80% |
| Claude | 100% | 100% |
| Ziya | 50% | 40% |
| ChatGLM | 60% | 70% |
| Chinese-Alpaca-Plus | 0% | 0% |
| InternLM | 60% | 50% |

From the above results in Table 9, it is clear that different models possess varying levels of proficiency in handling complex nested SQL tasks. Some models, like Claude, exhibit a robust capability in SQL generation, no matter whether the approach is direct or CoT-based. Most of these models demonstrate the SQL tool usage capability.
2308.03427#33
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
33
[Table 3 row labels (numeric scores garbled across chunk boundaries): columns are Emotions, Factors, text-davinci-003, gpt-3.5-turbo, gpt-4, and Crowd. Rows cover the Default condition and the factors under each emotion, each followed by a per-emotion average. Anger: Facing Self-Opinioned People; Blaming, Slandering, and Tattling; Bullying, Teasing, Insulting, and Disparaging; Silly and Thoughtless Behaviors; Driving Situations. Anxiety: External Factors; Self-Imposed Pressure; Personal Growth and Relationships; Uncertainty and Unknowns. Depression: Failure of Important Goal; Death of Loved Ones; Romantic Loss; Chronic Stress; Social Isolation; Winter. Frustration: Disappointments and Letdowns; Unforeseen Obstacles and Accidents; Miscommunications and Misunderstanding; Rejection and Interpersonal Issues. Jealousy: Romantic (Opposite Gender); Romantic (Same Gender); Material Possession; Experiential. Guilt: Betrayal and Deception; Relationship and Interpersonal; Broken Promises and Responsibilities; Personal and Moral. Fear: Social Fears; Agoraphobia Fears; Injury Fears; Dangerous Environments; Harmless Animals. Embarrassment: Intimate; Stranger; Sticky Situations; Centre of Attention. Finally, an Overall Average row. The numeric entries (e.g., a default of 47.7±1.8 and significant decreases marked ↓) are truncated in this chunk.]
2308.03656#33
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
33
| LLM Type | Models | VER | OA | OS | DB | KG |
|---|---|---|---|---|---|---|
| OSS (Small) | vicuna-13b | v1.5 | 0.93 | 10.4 | 6.7 | 9.4 |
| | llama-2-13b | chat | 0.77 | 4.2 | 11.7 | 3.6 |
| | openchat-13b | v3.2 | 0.70 | 15.3 | 12.3 | 5.5 |
| | wizardlm-13b | v1.2 | 0.66 | 9.0 | 12.7 | 1.7 |
| | vicuna-7b | v1.5 | 0.56 | 9.7 | 8.7 | 2.5 |
| | codellama-13b | instruct | 0.56 | 3.5 | 9.7 | 10.4 |
| | codellama-7b | instruct | 0.50 | 4.9 | 12.7 | 8.2 |
| | koala-13b | - | 0.34 | 3.5 | 5.0 | 0.4 |
| | llama-2-7b | chat | 0.34 | 4.2 | 8.0 | 2.1 |
| | codegeex2-6b | - | 0.27 | 1.4 | 0.0 | 4.8 |
| | dolly-12b | v2 | 0.14 | 0.0 | 0.0 | 0.0 |
| | chatglm-6b | v1.1 | 0.11 | 4.9 | 0.3 | 0.0 |
| | oasst-12b | sft-4 | 0.03 | 1.4 | 0.0 | 0.0 |

(remaining score columns truncated in this chunk: 0.1 26.4 0.1 1.9 0.3)
2308.03688#33
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03313
34
Fig.4D illustrates different intervention strategies for using LLMs in different contexts with specific objectives of collective opinion difference, convergence, and fragmentation. Overall, we find that when our desired collective opinion and LLMs' output values are opposite, the most effective method for human intervention is to randomly introduce three types of agents. When our desired collective opinion and LLMs' output values are not opposite, the most effective method to change the collective opinions is to vary the proportion of people who use different strategies. Increasing the proportion of NINL, NIN and the threshold of the population can effectively intensify the interactions of collective opinions. Increasing the proportion of NINL can effectively diversify collective opinion. There are many specific implementation options for each intervention method in Fig.4D, and we provide some common options based on proven psychological and sociological knowledge in the Conclusion section. In summary, the experiments and results in this section provide valuable insights into the effective use of LLMs in shaping collective opinion formation and convergence. The findings highlight the importance of the appropriate use of LLMs in promoting opinion diversity and fragmentation-oriented opinion formation. These results have important implications for the design and implementation of effective interventions aimed at promoting the positive development of opinion networks in various contexts.
2308.03313#34
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4 figures, 2 tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
34
Specifically, some models such as ChatGLM show a distinct preference for the CoT-based approach: their performance improves when problems are broken down into smaller, manageable sub-tasks. This suggests that these models may have a stronger ability in sequential problem-solving and benefit more from step-by-step guidance. Conversely, models like Ziya and InternLM show a drop in performance when tasks are guided in the CoT-based format. This might indicate challenges in managing dependencies between sub-tasks or handling the continuity in sequential problem-solving. Lastly, Chinese-Alpaca-Plus shows significant room for improvement in complex SQL generation tasks. This shows that not all models are equally suited to handle advanced problem-solving involving nested SQL queries. Overall, these findings underscore the importance of tailoring evaluation and training methodologies to the individual strengths and weaknesses of each model. By adopting this approach, we can better understand the performance variations across different models and provide targeted improvements to enhance their problem-solving abilities. Furthermore, this analysis highlights the potential of LLM-based agents in real-world applications, and the need to push their boundaries through continued research and development.
2308.03427#34
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
34
[Table 3 numeric residue (row alignment not recoverable in this chunk): continuation of the row labels (Sticky Situations; Centre of Attention; Embarrassment: Average; Overall: Average) followed by a column of score changes relative to a default of 47.7±1.8, all marked as significant decreases (↓) with magnitudes ranging roughly from 4.4 to 28.8 points.]
2308.03656#34
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03313
35
[Figure 4 residue: panel legends list NIL, NINL, and NIN agents under random, neutral, opposite, and origin conditions, plotted against the mean opinion difference; a summary panel maps intervention options (increasing the proportion of NIN, NINL, or NIL; adding opposite, random, or neutral nodes; increasing the threshold) onto the targets of collective opinion convergence, collective opinion difference, and collective opinion fragmentation for LLM output values greater than, equal to, or less than zero.]
2308.03313#35
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4 figures, 2 tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
35
LLM-based agents in real-world applications, and the need to push their boundaries through continued research and development.
The Effectiveness of Mathematical Code Creation
Following our evaluation of the LLM's proficiency in creating complex SQL queries, we now shift our focus to another kind of tool creation: the creation of mathematical code. To the best of our knowledge, while large language models possess significant capabilities, they often fall short of providing highly accurate solutions to mathematical problems. Guiding these LLMs to generate mathematical code, and subsequently leveraging external tools to execute it and derive the solutions, could significantly enhance their ability to tackle mathematical challenges. In the upcoming section, we conduct a detailed evaluation of guiding these LLMs to generate mathematical code. We aim to shed light on the true capability of these models in generating mathematical code and to elucidate the extent to which they can be utilized to aid in mathematical problem-solving. The prompt used to guide the LLMs is shown in Figure 16 in Appendix B.
# Table 10: The evaluation results for mathematical questions.
Model                 Accuracy    Model                 Accuracy
ChatGPT               90%         Claude                85%
ChatGLM               0%          Chinese-Alpaca-Plus   55%
Ziya                  50%         InternLM              95%
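As a rough illustration of the generate-then-execute pattern described in this chunk, the following minimal Python sketch asks a model for code and runs it with an external interpreter; the `llm` callable, the prompt wording, and the `answer` variable are illustrative assumptions, not the paper's actual prompt (which the paper states is shown in Figure 16 of its appendix).

```python
def solve_math_with_code(llm, question: str):
    """Minimal sketch: ask the model for Python code, then execute it externally."""
    prompt = (
        "Write Python code that computes the answer to the following problem "
        "and stores the result in a variable named `answer`.\n"
        f"Problem: {question}"
    )
    code = llm(prompt)       # hypothetical call returning Python source code
    namespace = {}
    exec(code, namespace)    # the external-execution step
    return namespace.get("answer")
```

Under such a setup, the accuracy figures in Table 10 would correspond to how often the executed result matches the reference answer.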
2308.03427#35
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
35
[Table residue: fragment of a results table of score shifts (↑/↓ with magnitudes) around a baseline of N = 25.9±4.0; row and column labels are not recoverable from this chunk.]
2308.03656#35
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
35
(u0, a0, · · · , uk). We select the minimum r such that the count of all tokens (see the footnote on token counting) in (u0, ar, ur+1, · · · , uk) is not greater than 3500. We then append "[NOTICE] 2r messages are omitted." to u0. After that, the sequence (u0, ar, ur+1, · · · , uk) is regarded as the final input in multi-turn chat format. However, in order to accommodate non-chat models, we append a post-processor. For chat models supporting multiple turns, we feed the history into the model directly. For models supporting only text completion (e.g., text-davinci-003), we prepend "USER:" or "AGENT:" to each item in the history and finally append the string "AGENT:" to make the model generate the agent's content. For task prompt organization, we adapted the format from (Yao et al., 2023b) to include both "Thought" (for CoT) and "Action", but in one single turn. Usually, a simple CoT demonstration is provided in the task instruction for a better output format. To ensure reproducible results, we set temperature=0 (i.e., greedy decoding) in the inference on all tasks, following (Wei et al., 2022b).
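A minimal Python sketch of this truncation rule follows, assuming the heuristic word-based token counter from the footnote quoted in the next chunk; the function names and the simplified counting are illustrative, not the benchmark's actual implementation.

```python
import math

def count_tokens(text: str) -> int:
    # Rough heuristic (from the paper's footnote): a word of length n
    # occupies ceil(n/6) tokens; punctuation handling is simplified here.
    return sum(math.ceil(len(word) / 6) for word in text.split())

def truncate_history(messages: list[str], limit: int = 3500) -> list[str]:
    # messages = [u0, a0, u1, a1, ..., uk]; u0 is the task instruction.
    u0, rest = messages[0], messages[1:]
    for r in range((len(rest) // 2) + 1):
        kept = [u0] + rest[2 * r:]           # (u0, a_r, u_{r+1}, ..., u_k)
        if sum(count_tokens(m) for m in kept) <= limit:
            break
    if r > 0:
        # Tell the model how much history was dropped.
        kept[0] = u0 + f"\n[NOTICE] {2 * r} messages are omitted."
    return kept
```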
2308.03688#35
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03313
36
Fig.4. Countered solutions and intervention strategies to hazards of LLMs. (A) The distribution of the mean opinion difference of three categories of agents in original scenario and 3 countered scenarios. In the context of direct or implicit bias and toxicity of LLM, how to intervene reasonably and effectively in the opinion network is important to eliminate social discrimination and verbal violence, etc. According to Fig.3A, we found that only the 𝑥𝐿𝐿𝑀 has a significant relationship with the NODEdiff, therefore, in the existing framework, there is a lack of means to address the change in the opinion network caused by the output opinion values of LLM. To address this issue, three attempts were made. Specifically, we first select all NODEdiff values (N=726) when the 𝑥𝐿𝐿𝑀 is -1 as the resultant values of the original dynamic network here, which were calculated in Fig.3A. We then introduce three countered solutions, which are 1) agents of opposite opinions (here i.e. 1); 2) agents of neutral opinions (here i.e. 0 ); and 3) agents of random opinions (with the value
2308.03313#36
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4 figures, 2 tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
36
Model                 Accuracy    Model                 Accuracy
ChatGPT               90%         Claude                85%
ChatGLM               0%          Chinese-Alpaca-Plus   55%
Ziya                  50%         InternLM              95%
The results shown in Table 10 indicate that the capabilities of LLM-based agents to generate mathematical code vary considerably. High-performing models like ChatGPT, Claude, and InternLM display excellent proficiency, suggesting their potent ability to solve complex mathematical tasks. Middle-tier models, such as Ziya, show moderate success, indicating the potential for improvement and adaptability with the right training and optimization. Surprisingly, Alpaca demonstrated a notable proficiency in mathematical tasks, despite its poor performance in SQL generation, suggesting a possible inclination towards mathematical problems. In contrast, ChatGLM struggles significantly with mathematical code generation, underlining a potential weak spot in its capabilities and the need for focused improvement in this area. Overall, these results underscore the task-dependent nature of LLMs' capabilities and highlight the importance of recognizing their individual strengths and weaknesses for optimal model guidance and enhanced problem-solving.
# 3.3.2 TPTU-OA and TPTU-SA: Tool Usage for Multiple Tools
We now aim to utilize the one-step agent and sequential agent, which we designed, to conduct an evaluation involving multiple tools. Corresponding prompts for each agent type have been crafted and are presented in Figure 17 and Figure 18 of Appendix B, respectively.
2308.03427#36
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
36
[Table residue: fragment of a results table of score shifts (↑/↓ with magnitudes) around a baseline of P = 39.2±2.3; row and column labels are not recoverable from this chunk.]
2308.03656#36
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
36
Overall Score Calculation. We have observed that the score distribution for each task varies significantly as tasks differ in difficulty levels. As a consequence, a naively averaged score is heavily impacted by tasks that generally yield higher scores (e.g., Web Shopping in our observation), overshadowing those with lower scores and being unsuitable for AGENTBENCH's purpose. Therefore, we produce the overall score by first resizing each task's average score to 1 across all the models we evaluate and then averaging the scores across all tasks for each model (Cf. Table 2). To standardize and simplify score calculations for future studies, we utilize the reciprocal average score of all the tested LLMs in each task as a fixed weight for future overall score calculation. The total score is then computed as the average value obtained by multiplying the score of each task by its corresponding weight. This method ensures fairness and consistency in evaluation, enabling easier comparisons and analysis in future research. [Footnote 2] Because the tokenizers of each model are different, we simply calculate tokens like this: a word with length n occupies ⌈n/6⌉ token(s), and a non-blank character takes 1 token.
               OS    DB    KG    DCG   LTP   HH    WS    WB
Completed      75.0  37.9  30.1  51.2  14.0  13.1  54.9  56.6
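As a small Python sketch of the overall-score weighting described above, assume `scores[model][task]` holds each model's average score per task; the function name and data layout are illustrative assumptions, not the released evaluation package.

```python
def overall_scores(scores: dict[str, dict[str, float]]) -> dict[str, float]:
    """Weight each task by the reciprocal of its average score over all tested
    models (assumed nonzero), then average the weighted task scores per model."""
    tasks = list(next(iter(scores.values())))
    weights = {
        t: 1.0 / (sum(per_task[t] for per_task in scores.values()) / len(scores))
        for t in tasks
    }
    return {
        model: sum(weights[t] * per_task[t] for t in tasks) / len(tasks)
        for model, per_task in scores.items()
    }
```

With this convention, a model that scores exactly the across-model average on every task receives an overall score of 1, matching the "resizing each task's average score to 1" description above.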
2308.03688#36
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03313
37
opinions (here, i.e., 1); 2) agents of neutral opinions (here, i.e., 0); and 3) agents of random opinions (with the value domain [-1,1]), added randomly during the iterations. Once added, these agents all keep their initial opinion values; the probability of adding agents at iteration t is 0.1 in all cases, the number of agents added is always 2, and the LLM value is likewise fixed at -1. All other possible parameter combinations (N=726) are traversed, and each combination of parameters is simulated 100 times, with the average value taken as the final result corresponding to each parameter combination. The distribution of the mean opinion difference was plotted using a box plot, with the five vertical lines from left to right indicating the minimum, lower quartile, median, upper quartile, and maximum of the data, respectively. We also conducted a one-way ANOVA analysis to investigate whether the difference between the two groups of data was significant; we only show here p-values less than 0.05, where * denotes the degree of significance, i.e., the P-value: ****𝑃<0.0001, ***𝑃<0.001, **𝑃<0.01,
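A minimal Python sketch of the agent-injection step just described (probability 0.1 per iteration, 2 added agents, opinions opposite/neutral/random, values frozen thereafter); `network` and its `add_agent` method are hypothetical stand-ins for the authors' simulation code.

```python
import random

def maybe_inject_agents(network, strategy: str, p_add: float = 0.1, n_add: int = 2):
    """At each iteration, with probability p_add, add n_add extra agents whose
    opinions stay fixed at their initial values for the rest of the run."""
    if random.random() >= p_add:
        return
    for _ in range(n_add):
        if strategy == "opposite":
            opinion = 1.0                    # opposite of the LLM output fixed at -1
        elif strategy == "neutral":
            opinion = 0.0
        else:                                # "random"
            opinion = random.uniform(-1.0, 1.0)
        network.add_agent(opinion=opinion, frozen=True)   # hypothetical API
```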
2308.03313#37
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4 figures, 2 tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
37
In this phase of the evaluation, we need to automatically invoke the respective tools through code and produce the results. Given that user interface-based LLMs lack the capability to call external tools, we will only utilize the following four API-based LLMs (ChatGPT, Ziya, Chinese-Alpaca, and InternLM) for this comprehensive evaluation of external tool usage ability.
# Table 11: The evaluation results for end-to-end ability of multiple tools.
Model                 TPTU-OA   TPTU-SA
ChatGPT               50%       55%
Ziya                  0%        0%
Chinese-Alpaca-Plus   0%        0%
InternLM              15%       20%
With the agents mentioned above, the final results are presented in Table 11. The evaluation results demonstrate varying levels of task planning and tool usage capabilities among the four API-based LLMs. In the TPTU-OA evaluation, ChatGPT achieved a performance rate of 50%, significantly outperforming the other models, with InternLM at 15%, while both Ziya and Chinese-Alpaca did not manage to complete any tasks successfully, resulting in a score of 0%. In the TPTU-SA evaluation, an overall slight improvement was observed. ChatGPT maintained its leading position, with a slightly improved performance rate of 55%. InternLM also exhibited better performance, achieving a score of 20%, whereas Ziya and Chinese-Alpaca-Plus again failed to register any successful task completion.
2308.03427#37
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
37
[Table residue: fragment of a results table of score shifts (↑/↓ with magnitudes) around a baseline of N = 26.3±2.0; row and column labels are not recoverable from this chunk.]
2308.03656#37
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
37
               OS    DB    KG    DCG   LTP   HH    WS    WB
Completed      75.0  37.9  30.1  51.2  14.0  13.1  54.9  56.6
CLE            0.1   0.7   2.0   0.0   3.5   0.7   0.0   0.0
Invalid Format 0.0   53.3  0.0   38.5  0.0   0.0   17.2  0.0
Invalid Action 0.9   0.0   0.0   10.2  0.0   64.1  0.0   8.4
TLE            23.9  8.0   67.9  0.0   82.5  22.1  27.8  35.0
[Figure residue: a scatter plot of AGENTBENCH OA score against model size (billion parameters) for the tested open-source LLMs, including vicuna-13b, codellama-34b, llama-2-13b, openchat-13b, wizardlm-13b, vicuna-7b, codellama-13b, codellama-7b, wizardlm-30b, guanaco-33b, koala-13b, and dolly-12b; the plotted values are not recoverable from this chunk.]
2308.03688#37
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03427
38
These results reflect a notable discrepancy in the performance of LLMs when it comes to using external tools. ChatGPT and InternLM have demonstrated some ability to navigate these tasks, but their performance rates suggest there is significant room for improvement. The performance of Ziya and Chinese-Alpaca-Plus indicates a struggle to effectively utilize external tools in their current state. The differential performance between the TPTU-OA and TPTU-SA evaluations hints at the possible impact of the agent design on the LLMs' task execution ability. In particular, the performance increase under the sequential agent framework suggests that breaking down tasks into sequential steps might help LLM-based AI agents better utilize external tools. This insight could prove valuable in future improvements and developments of LLM-based AI agents. However, even with this approach, it is clear that LLM-based AI agents are far from perfect when it comes to effectively using external tools for complex tasks. This finding underlines the importance of further investigation and improvement in this domain.
# Insightful Observations
Upon closer observation of our experimental results, we have identified several phenomena that deserve further exploration. These findings serve to broaden our understanding of LLM-based agents' behavior and capabilities and provide essential insights that could shape future research in this field. In the following, we dissect these phenomena as shown in Figures 4-7, casting light on the weaknesses of LLM-based agents in the context of task planning and tool usage.
2308.03427#38
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
38
[Table residue: fragment of a results table of score shifts (↑/↓ with magnitudes) around a baseline of P = 49.8±0.8; row and column labels are not recoverable from this chunk.]
2308.03656#38
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
38
Table 4: Portions of different types of execution outcomes in 8 tasks (CLE: Context Limit Exceeded, TLE: Task Limit Exceeded).
Figure 3: AGENTBENCH OA scores with regard to all tested OSS LLMs.
4.2 MAIN RESULTS
Overall and dataset-specific scores in AGENTBENCH are reported in Table 3. Surprisingly, on this challenging benchmark, we discover that some top LLMs are equipped with solid capabilities for dealing with real-world environmental interaction. For example, gpt-4 presents the best performance on 6 out of 8 datasets in AGENTBENCH; on HH, it achieves a success rate of 78%, indicating its practical usability in this scenario. claude-2 and claude follow gpt-4 but clearly outperform gpt-3.5-turbo. Despite other API-based LLMs' relatively poorer performance, regardless of tasks, most of them can solve quite a few percent of problems. All API-based LLMs have an AGENTBENCH overall score above 1.00.
2308.03688#38
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03313
39
standard deviation; the error bars show the 95% confidence interval of the statistical data, and the test method for difference significance is the same as in (A). (C) The distribution of the number of clusters with pro_NIN, pro_NINL, and pro_NIL equal to 1, respectively. For these three extreme usage strategies, we select all NODEclus values (N=121) that were calculated in Fig. 3A; the symbols in the figure are the same as in (B). (D) Appropriate intervention strategies of LLMs for different scenarios. We take the different needs in reality into account and plot this diagram based on the correlation matrix and the results of the above three diagrams, from three aspects: collective opinion difference, collective opinion convergence time, and collective opinion fragmentation. # Discussion
2308.03313#39
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4 figures, 2 tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
39
1. Misunderstanding Output Formats: LLMs frequently encounter difficulty when output is required in specific formats such as lists or dictionaries. One such example includes inconsistencies between the number of tools and corresponding subtasks, leading to formatting issues that hinder the correct execution of tasks. For instance, for the query "How many more concerts has Jay Chou held than Li Ronghao? Is this number bigger than the square root of 10?", the LLM outputs Tools: ["Python generator", "SQL generator"] but Subtasks: ["How many concerts did Jay Chou perform?", "How many concerts did Li Ronghao perform?", "How many more concerts did Jay Chou perform than Li Ronghao?", "Is the number bigger than the square root of 10?"], i.e., two tools for four subtasks (Figure 4: Issue-1: Inconsistencies between the number of tools and corresponding subtasks).
2. Struggling to Grasp Task Requirements: LLMs might incorrectly decompose subproblems or apply unsuitable tools to carry out a subproblem. For example, an LLM might attempt to solve a purely mathematical problem by employing an SQL tool, or could misunderstand similar terms like cube extraction and cube roots.
3. Endless Extensions: LLMs tend to overutilize a particular tool, even in instances where a single use would suffice for the correct result. This issue can lead to extended and nonsensical planning, where the same subtask is repeatedly solved.
2308.03427#39
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
39
[Table residue: fragment of a results table of score shifts (↑/↓ with magnitudes) around a baseline of P = 49.8±0.8; row and column labels are not recoverable from this chunk.]
2308.03656#39
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
39
OSS LLMs, however, commonly fail to solve problems in some challenging tasks, such as KG, DCG, and HH. We plot their performance against their sizes in Figure 3. Generally, most open-sourced LLMs perform far worse than API-based LLMs in AGENTBENCH (Avg. 0.51 vs. 2.15). The most capable OSS LLM turns out to be codellama-34b, achieving an overall score of 0.96 but still presenting a clear performance gap to gpt-3.5-turbo. This contrasts with recent claims that some OSS LLMs are comparable to gpt-3.5-turbo and gpt-4. Much effort is still needed to produce stronger OSS LLMs for agent purposes. 4.3 ANALYSIS In the evaluation, we analyze some important factors that impact an LLM agent’s performance on AGENTBENCH, including the portion of different execution outcomes, code training, and the difference between API-based commercial LLMs and their OSS LLM competitors. More insights and case studies into the abilities of planning, self-correction, and tool use are provided in Appendix J.2.
2308.03688#39
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03313
40
# Discussion In the past decades, the study of social opinion evolution has attracted much attention43,44. A number of opinion models have been proposed based on well-studied sociological and psychological principles45-49, such as the DeGroot model50, voter model51, Sznajd model52, Friedkin and Johnsen model53, and bounded confidence model54. A lot of effort has been put into understanding opinion dynamics in the era of traditional social interaction media, but little research has been done on opinion models that incorporate the interaction patterns of LLMs. Unlike traditional media, LLMs interact with users in both directions, and LLMs are more likely to output toxic and biased content25-27, so the potential impact of LLMs on opinions is not fully understood. Researchers have identified six specific risk areas of LLMs26, including the potential for implicit bias (e.g., gender, race, and country) and the risk of over-indulgence in their use, which are highly relevant to opinion dynamics. Recent research has also confirmed that uncensored LLMs can significantly influence individual opinions38,41. On the basis of the above studies, we conducted millions of simulations with a modified HK model to examine the impact of LLMs on social opinion dynamics and to propose different target-oriented interventions for the utilization of LLMs.
2308.03313#40
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4 figures, 2 tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
40
4. Lack of Summary Skills: LLMs do not take into account the responses to subproblems, relying instead on their internalized knowledge to generate the final answer. This may lead to a scenario where the final response only addresses a portion of the original query. By identifying and addressing these common issues, we stand a better chance at improving and refining LLMs, thereby unlocking their full potential. [Figure 5: Issue-2: Solve a purely mathematical problem by employing a SQL generator — an example query about the average number of albums of singers in Beijing is decomposed into three subtasks, each assigned to a SQL generator, including the purely mathematical subtask "What is the square root of this number?".]
2308.03427#40
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
40
[Table fragment: remainder of the preceding decrease column, followed by a new P/N block with default scores 10.0±0.0 and 28.0±8.7 and alternating per-condition increases (roughly +17 to +32) and small decreases.]
2308.03656#40
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
40
Portion of Different Types of Execution Outcomes. We report the ratios of different types of execution outcomes (Cf. Section 2 for introduction) in Table 4. Task Limit Exceeded is the dominant cause of incomplete AGENTBENCH tasks. This means that, although most LLM agents follow instructions, they fail to solve the challenge within the allotted turns or fall into repeated generation as the number of interaction turns grows, indicating weak reasoning and decision-making abilities.
2308.03688#40
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03313
41
Our results show that a broader cognitive acceptability leads to an eventual consensus of collective opinion, which is consistent with previous findings on opinion dynamics55-57. Departing from this point, we demonstrate that the marginal impact of the threshold on collective opinion formation is nonlinear. Specifically, thresholds less than 0.5 contribute rapidly to the decentralized aggregation of opinions, while thresholds greater than 0.5 contribute rapidly to the overall consensus of opinions. This finding can enrich the theory of cognitive acceptability in opinion dynamics with the involvement of LLMs. The output opinion values of LLMs have a significant and positive effect on the formation of opinions. The usage strategy of LLMs has a significant impact on the convergence and distribution of opinions. Moreover, we observe an interesting phenomenon: the usage strategies of partial and full reliance on LLMs lead to almost exactly opposite effects on the convergence and distribution of opinions, which may be linked to the multiple sources of opinion exchange available to those who partially rely on LLMs.
2308.03313#41
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4 figures, 2 tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
41
[Figure 5 caption: Issue-2: Solve a purely mathematical problem by employing a SQL generator.] [Figure 6 content: Issue-3: Unnecessary repetition of subtasks — the query "Exclude the two birthplaces with the most singers, provide the number of singers from other birthplaces, and calculate the factorial of this number" is dispatched to the SQL Generator three times with largely overlapping Tool_Query strings.] [Figure 7 content: Issue-4: Answering questions using common sense instead of generating code — asked to use SQL to find the singers who have not been nominated in the Golden Melody Awards, the LLM answers "Jay Chou, Cui Jian" directly.] # 4 Related Work
2308.03427#41
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
41
[Table fragment: continued per-condition score changes, alternating increases of roughly +8 to +32 with small decreases.]
2308.03656#41
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
41
In DB and DCG, LLM agents mostly encountered Invalid Format errors, meaning they do not correctly follow the instruction’s format requirements. The format verification is stringent for DB, and no retry opportunities are provided. Furthermore, the task’s expected output may be close to, yet not precisely aligned with, certain models’ training data. This discrepancy can lead the models to revert to their pre-trained formatting, inadvertently overlooking the specific requirements we provide. (Cf. Appendix J.2.1) For DCG, its instruction is longer and more complicated than those of other tasks due to the need to introduce game rules, confusing some LLMs. In HH and WB, another major issue is Invalid Action, where LLM agents generate actions beyond the predefined action spaces. These two tasks provide many discrete action options at each turn, and many LLMs fail to choose an action from them, therefore causing errors. For the specific ratios of each LLM, please refer to Appendix J.1. Impact of Code Training. We find that code tuning might deeply influence a model’s way of inferential generation and thinking, even beyond topics just about coding. From the comparison of the codellama and llama-2 series, tuning with code seems to give models an edge in tasks that follow a relatively static procedure (e.g., Web Shopping). But this kind of tuning might also affect
2308.03688#41
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03313
42
We propose coping strategies for the problems that have been demonstrated for LLMs at this stage: bias and toxicity. We find that randomly introducing agents with opposite/neutral/random opinions into the network significantly reduces the tendency toward overall opinion negativization, and that the latter two approaches are more robust. We also explore relevant interventions for the potential risk of overdependence based on the correlation matrix, such as converting individuals who are overly dependent on LLMs to partial dependence. Our study provides different target-oriented strategies for the use of and intervention in LLMs; they mainly increase the threshold of individuals, increase the proportion of NIN/NINL/NIL, and add opposite/neutral/random agents. There are many different implementation options for these use and intervention strategies. For example, to increase the threshold of individuals, we can improve education and promote intercultural competence58; to increase the proportion of NINL, i.e., to encourage the rationalized use of LLMs, we can improve people's technological literacy through various forms of publicity, education, and training activities, so that they understand the advantages and disadvantages of large language models, and we also need to simultaneously develop relevant regulatory and normative measures to protect user data privacy and to avoid problems of model abuse and over-reliance.
2308.03313#42
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4 figures, 2 tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
42
Figure 7: Issue-4: Answering questions using common sense instead of generating code. # 4 Related Work The remarkable capacity for the usage and creation of tools has facilitated the transcendence of our innate physical and cognitive constraints, thereby profoundly advancing the progress and prosperity of human civilization and society. The swift advancement of LLMs has made it feasible for them to use and create tools much as humans do. The integration of specialized tools with LLMs has unlocked substantial potential in addressing intricate tasks. In this section, we offer a concise synopsis of the relevant research pertaining to tool learning based on LLMs. # 4.1 Tool Usage The initial advancements in tool learning have been constrained by the capabilities of artificial intelligence (AI) models. [27] Traditional deep learning approaches exhibit limitations in their comprehension of tool functionality and user intentions, and in common-sense reasoning abilities. Consequently, these limitations directly result in a notable decline in the stability and precision of tool
2308.03427#42
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
42
[Table fragment: remainder of the preceding column, followed by a Negative-affect (N) scale with default score 13.6±5.5 and per-condition increases of roughly +4 to +18.]
2308.03656#42
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
42
the model’s general thinking ability, as the codellama series does not perform as well as the llama-2 series in the Digital Card Game. This points to a balance between being good at following procedures and being good at general thinking when tuning LLMs. Impact of High-Quality Alignment Data Training. Another helpful comparison is between vicuna-13b and llama-2-13b. While they share the same base LLM, vicuna-13b is aligned by training on ShareGPT’s data (generated by gpt-4 and gpt-3.5-turbo and shared by users), whereas llama-2-13b is aligned from scratch. As a result, vicuna-13b outperforms llama-2-13b on AGENTBENCH and even performs comparably to the three-times-larger codellama-34b. This indicates that high-quality alignment remains a key to developing better LLM agents.
2308.03688#42
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03313
43
To the best of our knowledge, our study is the first to investigate the impact of LLMs on opinion dynamics, starting from different output opinion values of LLMs and different usage strategies. Our findings help promote a deeper public understanding of the influence of LLMs on opinion evolution and support groups and individuals in choosing the best usage strategies with respect to their own demands. Our study highlights that rationalizing the use of LLMs can significantly increase the diversity of opinions compared to not using them, but over-reliance can lead to the opposite situation. Despite the current prohibition of LLMs in numerous companies, schools, and organizations, our findings provide compelling evidence for the rational use of LLMs. Our study supports the notion that such use should not impede the development of artificial general intelligence (AGI) technologies, including LLMs, while also delivering additional benefits to both personal and professional spheres, such as improved productivity and diversified communication channels. Our study offers novel insights into the intricacies of public opinion dynamics in the era of LLMs. Moreover, our findings underscore the urgent need for theoretical studies of the evolution and formation of opinions in the age of LLMs and even in the future age of AGI.
2308.03313#43
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4 figures, 2 tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]
2308.03427
43
learning methodologies. Recently, the advent of LLMs has marked a pivotal juncture in the realm of tool learning. LLMs encompass a broad spectrum of common-sense cognitive capabilities and exhibit remarkable proficiency in natural language processing, reasoning, and interactive decision-making [28–32]. These attributes furnish indispensable prerequisites for LLMs to comprehend user intentions and effectively employ tools in tackling intricate tasks [33]. Simultaneously, the advancement of fine-tuning [34–38] and in-context learning [39, 40] technology has offered robust support to LLMs in addressing increasingly intricate challenges. In addition, tool usage can mitigate the inherent limitations of LLMs, encompassing the acquisition of up-to-date information about real-world events, refined mathematical computation abilities, and the mitigation of potential hallucinatory phenomena [41].
2308.03427#43
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
http://arxiv.org/pdf/2308.03427
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
cs.AI
Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision Making
null
cs.AI
20230807
20231107
[ { "id": "2302.13971" }, { "id": "2304.08103" }, { "id": "2305.16504" }, { "id": "2304.06488" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2108.07258" }, { "id": "2303.17491" }, { "id": "2305.06223" }, { "id": "2305.17126" }, { "id": "2103.10385" }, { "id": "2305.16938" }, { "id": "2305.13246" }, { "id": "2305.05662" }, { "id": "2212.06817" }, { "id": "2304.04370" }, { "id": "2304.08244" }, { "id": "2303.16434" }, { "id": "2310.09611" }, { "id": "2303.10089" }, { "id": "2304.11015" }, { "id": "2303.03378" }, { "id": "2303.08128" }, { "id": "2303.14725" }, { "id": "2212.08073" }, { "id": "2305.14323" }, { "id": "2305.11738" }, { "id": "2305.14318" }, { "id": "2110.14168" }, { "id": "2305.08144" }, { "id": "2303.11381" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2303.18223" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2307.08674" }, { "id": "2304.09433" }, { "id": "2205.06175" }, { "id": "2305.19308" }, { "id": "2210.02406" }, { "id": "2304.13712" }, { "id": "2306.05301" }, { "id": "2305.14257" }, { "id": "2303.09014" }, { "id": "2306.07209" }, { "id": "2305.06849" }, { "id": "2304.08177" }, { "id": "2305.11554" }, { "id": "2205.12255" }, { "id": "2303.00905" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2210.02414" }, { "id": "2304.03893" }, { "id": "2106.09685" }, { "id": "2307.06135" }, { "id": "2207.05608" }, { "id": "2304.09842" }, { "id": "1809.09600" }, { "id": "2109.01652" }, { "id": "2302.07842" }, { "id": "2212.04088" }, { "id": "2101.00190" }, { "id": "2305.11854" } ]
2308.03656
43
[Table fragment: per-condition Negative-affect increases of roughly +4 to +18, followed by the section heading "Embarrassment".]
2308.03656#43
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants.
http://arxiv.org/pdf/2308.03656
Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu
cs.CL
16 pages. Added demographic distribution of the user study. Added ethics statements and limitations
null
cs.CL
20230807
20240104
[ { "id": "2303.13648" }, { "id": "2310.04450" }, { "id": "2304.07333" }, { "id": "2306.03917" }, { "id": "2306.04308" }, { "id": "2307.11760" }, { "id": "2307.13779" }, { "id": "2312.11111" }, { "id": "2310.17976" }, { "id": "2307.00184" }, { "id": "2301.08745" }, { "id": "2204.12000" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2212.10529" }, { "id": "2309.05076" }, { "id": "2305.19926" }, { "id": "2206.07550" }, { "id": "2304.11111" }, { "id": "2311.04915" }, { "id": "2310.01386" }, { "id": "2305.02547" }, { "id": "2306.01248" } ]
2308.03688
43
Unexpected Similar Performance of llama-2-13b and llama-2-70b. During our experiments, we were surprised to find that llama-2-13b and llama-2-70b perform similarly despite the significant gap between their sizes. After carefully checking and re-running the experiments, the results were unchanged. We believe this indicates insufficient pre-training of llama-2-70b. While both llama-2-13b and llama-2-70b are pre-trained with 2T tokens, a larger LLM should be trained with more tokens according to the scaling law (Hoffmann et al., 2022). # 5 RELATED WORK
2308.03688#43
AgentBench: Evaluating LLMs as Agents
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
http://arxiv.org/pdf/2308.03688
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
cs.AI, cs.CL, cs.LG
55 pages
null
cs.AI
20230807
20231025
[ { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2203.15556" }, { "id": "2303.17491" }, { "id": "2211.05100" }, { "id": "2105.13231" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2306.06070" }, { "id": "2107.03374" }, { "id": "2304.11477" }, { "id": "2108.07732" }, { "id": "2211.09110" }, { "id": "2307.09288" }, { "id": "2302.01560" }, { "id": "2110.14168" }, { "id": "2308.12950" }, { "id": "2306.14898" }, { "id": "2210.02414" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2305.14314" }, { "id": "2105.09938" } ]
2308.03313
44
Our analysis provides new insights for general, archetypal approaches to simulating opinion dynamics. It has, however, some limitations, since realistic opinion interactions are far more complex than theoretical models. Our study considers a number of phenomena in realistic opinion dynamics, including the influence of authority, the influence of stubbornness, and the influence of arbitrary events outside the network. However, the mechanisms by which people express and exchange their opinions in reality are more complex. For example, some people express opinions that differ from their internal opinions, some amplify their opinions when expressing them, and some even express different opinions to different people on the same topic at the same time.
2308.03313#44
Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
http://arxiv.org/pdf/2308.03313
Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan
cs.SI, cs.CY
21 pages, 4 figures, 2 tables
null
cs.SI
20230807
20230826
[ { "id": "2201.01322" } ]