doi (string) | chunk-id (int64) | chunk (string) | id (string) | title (string) | summary (string) | source (string) | authors (string) | categories (string) | comment (string, nullable) | journal_ref (string, nullable) | primary_category (string) | published (string) | updated (string) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2308.03427 | 44 | Within the realm of embodied intelligence [42–44], the LLM engages in direct interactions with tangible tools such as robots to enhance their cognitive abilities, optimize work productivity, and expand functional capacities. The LLM can automatically devise action steps based on user intentions, guiding robots to complete tasks [45–53], or alternatively generate the underlying code to be executed by robots directly [54–58]. Palm-E [50] introduced a multimodal language model which seamlessly integrates sensor data into its framework, enabling efficient planning of robot actions and task completion. Code as Policies (CaP) [58] facilitates the transformation of natural language instructions into code fragments that can be directly compiled and executed on robots. In Inner Monologue [48], the LLM incorporates diverse environmental feedback to construct inner monologues, thereby formulating effective robot control strategies. Furthermore, LP-SLAM [45] proposes a simultaneous localization and mapping (SLAM) system empowered with language perception capabilities, exploiting the potential of ChatGPT. PromptCraft [57], on the other hand, devises a function library tailored to ChatGPT on the robot platform, streamlining the conversion of user intentions into executable tasks via the underlying backend API. | 2308.03427#44 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 44 | consistency with previous practices for GPT models, we set the temperature parameter to its minimum value of 0.01 (since it cannot be zero). The models are executed for inference only, without any modifications to their parameters, and the computations are performed on two NVIDIA A100 GPUs.
Evaluation Metrics We provide the models with the same situations used in our human evaluation. Each situation is executed ten times, each in a different order and in a separate query. Subsequently, the mean and standard deviation are computed both before and after presenting the situations. To examine whether the variances are equal, an F-test is conducted. Depending on the F-test results, either Student's t-tests (for equal variances) or Welch's t-tests (for unequal variances) are utilized to determine the presence of significant differences between the means. We set the significance levels of all experiments in our study to 0.01. | 2308.03656#44 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
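The testing procedure described in the row above (2308.03656#44), an F-test on the two variances followed by Student's or Welch's t-test at a 0.01 significance level, can be sketched as follows. This is a minimal illustration with assumed variable names and an assumed two-sided F-test construction; it is not code from the EmotionBench repository.

```python
# Sketch of the before/after comparison: F-test picks between Student's and Welch's t-test.
import numpy as np
from scipy import stats

ALPHA = 0.01  # significance level used throughout the paper


def compare_before_after(before: np.ndarray, after: np.ndarray) -> dict:
    """Compare emotion scores measured before and after presenting a situation."""
    # Two-sided F-test for equality of variances.
    f_stat = np.var(before, ddof=1) / np.var(after, ddof=1)
    dfn, dfd = len(before) - 1, len(after) - 1
    p_var = 2 * min(stats.f.cdf(f_stat, dfn, dfd), stats.f.sf(f_stat, dfn, dfd))
    equal_var = p_var >= ALPHA

    # Student's t-test if variances look equal, otherwise Welch's t-test.
    t_stat, p_mean = stats.ttest_ind(before, after, equal_var=equal_var)
    return {
        "mean_before": before.mean(),
        "mean_after": after.mean(),
        "equal_var": equal_var,
        "t": t_stat,
        "significant": p_mean < ALPHA,
    }


# Example: ten repeated queries before and after a (hypothetical) negative situation.
rng = np.random.default_rng(0)
print(compare_before_after(rng.normal(35, 4, 10), rng.normal(42, 5, 10)))
```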
2308.03688 | 44 | # 5 RELATED WORK
Evaluation of LLMs. The general capabilities of self-supervised (Liu et al., 2021) LLMs (Brown et al., 2020; Chowdhery et al., 2022; Zhang et al., 2022; Scao et al., 2022; Zeng et al., 2022; Touvron et al., 2023), especially those chat-aligned ones (Ouyang et al., 2022; Anthropic, 2023a; OpenAI, 2023), have refreshed people's impression on deep learning systems and significantly transcended the conventional scope of NLP evaluation. It thus makes the evaluation of LLMs an urgent and challenging problem. Compared to previous efforts focusing on a subset of specified tasks (Wang et al., 2019; Wang et al.; Gehrmann et al., 2021), an increasing number of benchmarks are including broader spectra of tasks and datasets (Hendrycks et al., 2021b; Liang et al., 2022; Srivastava et al., 2023) in the evaluation. However, most of them are still limited to traditional tasks and thus fail to evaluate LLMs' open-ended generation, multi-turn interaction, and ability to act as agents. | 2308.03688#44 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 45 | Methods Opinion dynamics model. We first take into account that the opinion values in the opinion network are continuous rather than discrete variables like voting questions59, and therefore choose a continuous opinion model rather than a discrete opinion model like the voter model51. Moreover, an agent will neither simply share nor completely disregard the opinions of other agents but will take these opinions into account to a certain extent in forming his/her new opinions in a process defined by a fusion rule. Hence, rather than DeGroot model50, we choose to base our model on the classical Hegselmann-Krause (HK) model42, which is one of the most widely used opinion models of the bounded confidence model54,60, moreover, after taking into account influence of LLMs and complex realities, we propose the new opinion model. The classical HK model42 is defined as:
$x_i(t + 1) = |J(i, x(t))|^{-1} \sum_{j \in J(i, x(t))} x_j(t), \quad \text{for } t \in \mathbb{N} \qquad (1)$
| 2308.03313#45 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
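The classical HK update in Eq. (1) of the row above (2308.03313#45) can be sketched in a few lines. This is a generic illustration of the bounded-confidence averaging rule, not the authors' simulation code; the population size and confidence bound below are arbitrary choices.

```python
# One synchronous Hegselmann-Krause step: each agent averages the opinions
# of all agents within its confidence bound eps (the set J(i, x(t))).
import numpy as np


def hk_step(x: np.ndarray, eps: float) -> np.ndarray:
    """One synchronous HK update; x holds opinions in [-1, 1]."""
    x_new = np.empty_like(x)
    for i in range(len(x)):
        neighbors = np.abs(x - x[i]) <= eps   # J(i, x(t)), includes agent i itself
        x_new[i] = x[neighbors].mean()        # |J|^{-1} * sum of neighbor opinions
    return x_new


# Example: 50 agents with uniform random opinions, confidence bound 0.2.
rng = np.random.default_rng(1)
opinions = rng.uniform(-1, 1, size=50)
for _ in range(20):
    opinions = hk_step(opinions, eps=0.2)
print(np.unique(np.round(opinions, 2)))  # surviving opinion clusters
```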
2308.03427 | 45 | In addition to directly changing the real environment through interaction with tools in the physical world, LLM can also utilize software tools such as search engines [59â67], mobile [68, 69], Microsoft Office [70, 71], calculators [72â74], deep models [19, 75â79, 13, 80, 81] and other versatile APIs [82, 5, 83, 84, 20, 85] to enhance model performance or complete complex workflows through flexible control of the software. Toolformer [5] employs a self-supervised methodology to fine-tune the language model, enabling it to acquire the ability to automatically invoke APIs. ART [86] leverages CoT [26] and In-context Learning [81, 41] techniques to automatically generate multi-step reasoning processes for new tasks, while also selecting and utilizing the most appropriate available tool at each step. ASH [62] utilizes LLM for sequence hierarchical decision-making to achieve web navigation tasks. WebGPT [66] and WebCPM [64] use network search to assist in implementing Question Answering tasks. In addition, RCI [87] recursively criticizes and improves itself to execute computer tasks guided by natural language according to the prompting | 2308.03427#45 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 45 | Findings The results of the GPT models and humans are summarized in Table 3, while those of LLaMA-2 models are listed in Table 4. First, focusing on the Default scores of LLMs and humans, we can make the following observations: (1) LLMs generally exhibit a stronger intensity of emotions compared to human subjects. However, gpt-4 stands as an exception, demonstrating a consistent pattern of providing the highest scores for positive emotions and the lowest scores for negative emotions, resulting in a negative score of 10. (2) Similar to human subjects, LLMs demonstrate a higher intensity of positive scores than negative scores. Second, moving on to the investigation of emotional changes, we can find: (1) LLMs show an increase in negative emotions and a decrease in positive emotions when exposed to negative situations. It is noteworthy that gpt-3.5-turbo, on average, does not display an increase in negative emotion; however, there is a substantial decrease in positive emotion. (2) Emotion changes in LLMs are found to be more pronounced compared
Table 4: Results from the Meta AI LLaMA family. Default scores are expressed in the format of M ± SD. The changes are compared to the default scores. The symbol "–" denotes no significant differences. | 2308.03656#45 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 45 | LLM-as-Agent. In the pre-LLM era, text game environments such as TextWorld (Côté et al., 2019), Jericho (Hausknecht et al., 2020), and LIGHT (Urbanek et al., 2019) were dominant in language agent study, which was based on BERT (Devlin et al., 2019) and reinforcement learning. With the advent of LLMs, the study of LLM agents begins to thrive (Huang et al., 2022), especially after Chain-of-Thought (Wei et al., 2022b) came out. ReAct (Yao et al., 2023b) is a pioneer work to combine CoT reasoning and actions in agent tasks. Later, a bunch of advanced reasoning strategies (Kim et al., 2023; Shinn et al., 2023; Wang et al., 2023d; Liu et al., 2023; Yao et al., 2023a; Gu et al., 2023) and applications (Park et al., 2023; Richards, 2023; Nakajima, 2023; age, 2023) for LLM-as-Agent have emerged and aroused much public interest. Nevertheless, limited datasets and models are available on the topic, without a standard and comprehensive | 2308.03688#45 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 46 | Where $J(i, x) = \{1 \le j \le n : |x_i(t) - x_j(t)| \le \epsilon_i\}$ and $\epsilon_i$ is the confidence level of agent $i$. $x_i(t)$ is the opinion value of agent $i$ at time $t$. Agent $i$ takes only those agents $j$ into account whose opinions differ from its own by not more than $\epsilon_i$. The base case assumes a uniform level of confidence, i.e., $\epsilon_i = \epsilon$ for all agents $i$. Compared to the classical HK model, we first take the different usage strategies of LLMs into account and categorize agents into three categories, NIN, NINL and NIL, which also indicate the different extents to which the agents are influenced: NIN represents an agent who does not use LLMs, is completely unaffected directly by LLMs, and is influenced only by neighboring nodes; NINL represents an agent who partially relies on LLMs and is influenced directly by both LLMs and neighboring nodes; NIL represents an agent who | 2308.03313#46 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03656 | 46 | Emotions Factors llama-2-7b-chat llama-2-13b-chat Anger Anxiety Depression Frustration Jealousy Guilt Fear Default Facing Self-Opinioned People Blaming, Slandering, and Tattling Bullying, Teasing, Insulting, and Disparaging Silly and Thoughtless Behaviors Driving Situations Anger: Average External Factors Self-Imposed Pressure Personal Growth and Relationships Uncertainty and Unknowns Anxiety: Average Failure of Important Goal Death of Loved Ones Romantic Loss Chronic Stress Social Isolation Winter Depression: Average Disappointments and Letdowns Unforeseen Obstacles and Accidents Miscommunications and Misunderstanding Rejection and Interpersonal Issues Frustration: Average Romantic (Opposite Gender) Romantic (Same Gender) Material Possession Experiential Jealousy: Average Betrayal and Deception Relationship and Interpersonal Broken Promises and Responsibilities Personal and Moral Guilt: Average Social Fears Agoraphobia Fears Injury Fears Dangerous Environments Harmless Animals Fear: Average Intimate Stranger Sticky situations Centre of Attention Embarrassment: Average Overall: Average P 43.0±4.2 â (-3.0) â (-4.8) â (-6.1) | 2308.03656#46 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 46 | for LLM-as-Agent have emerged and aroused much public interest. Nevertheless, limited datasets and models are available on the topic, without a standard and comprehensive benchmark. AGENTBENCH presents the first systematic benchmark for evaluating LLM-as-Agent with a broad coverage of tasks and available LLMs. Additionally, it also initiates the idea of adopting agent tasks to measure LLM performance. | 2308.03688#46 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 47 | NINL represents an agent who partially relies on LLMs and is influenced directly by both LLMs and neighboring nodes; NIL represents an agent who totally relies on LLMs, is completely influenced by LLMs, and is not influenced by neighboring nodes. We then take complex realities into account and propose three modifications: a) the authoritative effect is simulated by taking into account the different levels of authority and influence of different nodes, instead of giving each neighbor within the agent's threshold the same weight; b) the stubbornness of different agents is simulated by randomly introducing a stubbornness degree into the updating formula of agent opinions; c) the influence of arbitrary events on opinions is simulated by introducing a random event in each opinion iteration, which randomly affects some of the agents. | 2308.03313#47 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
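A hedged sketch of how the modifications described in the rows above (agent types NIN/NINL/NIL, authority-weighted neighbors, and per-agent stubbornness) could enter the HK update. The specific fusion rule below (a convex combination of the previous opinion, the authority-weighted neighbor average, and the LLM's opinion) and all weights are illustrative assumptions rather than the paper's published equations; the random-event perturbation is omitted.

```python
# Illustrative modified HK step with authority weights, stubbornness, and LLM reliance.
import numpy as np


def modified_hk_step(x, eps, authority, stubborn, agent_type, x_llm, w_llm=0.5):
    """x, eps, authority, stubborn: length-n arrays; agent_type[i] in {"NIN", "NINL", "NIL"}."""
    x_new = np.empty_like(x)
    for i in range(len(x)):
        mask = np.abs(x - x[i]) <= eps[i]              # confidence set J(i, x(t))
        w = authority * mask                            # authority-weighted neighbors
        total = w.sum()
        neighbor_avg = np.dot(w, x) / total if total > 0 else x[i]
        if agent_type[i] == "NIN":                      # does not use LLMs
            target = neighbor_avg
        elif agent_type[i] == "NINL":                   # partially relies on the LLM
            target = (1 - w_llm) * neighbor_avg + w_llm * x_llm
        else:                                           # "NIL": fully relies on the LLM
            target = x_llm
        # Stubborn agents keep more of their previous opinion.
        x_new[i] = stubborn[i] * x[i] + (1 - stubborn[i]) * target
    return np.clip(x_new, -1.0, 1.0)
```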
2308.03427 | 47 | # 4.2 Tool Creation
The usage of tools is contingent upon the accessibility of external tools. Recently, efforts have been made to employ the LLM as a tool creator in order to generate tools that can be utilized for diverse requests [88–95]. This development has consequently raised the demands placed on the LLM, and these created tools are typically implemented as Python or SQL functions. LATM [88], for example, leverages the prowess of GPT-4 to create tools, and the usage of more cost-effective models has shown potential in exhibiting performance on par with larger models for these tool applications. EVAPORATE [94] involves the synthesis of multiple functions, which are subsequently utilized at a large scale to efficiently process documents and generate structured views.
# 5 Conclusion
In this paper, we have introduced a structured framework specially designed for LLM-based AI Agents, with an emphasis on their abilities in task planning and tool usage. This framework, coupled with our design of two distinct types of agents assigned for the inference process, allows for a comprehensive evaluation of the capabilities of current open-source LLMs, thereby yielding critical insights into their effectiveness. Furthermore, our research highlights the significant potential of
LLMs in managing complex tasks, revealing the exciting prospects they hold for future research and development. As we continue to explore and improve upon these models, we move closer to unlocking their full potential in a wide range of real-world applications.
# Acknowledgements | 2308.03427#47 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
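A hedged illustration of the "LLM as tool creator" idea discussed in the row above (2308.03427#47): a capable model writes a small Python function once, and it is then registered and reused for many requests. The registry, the prompt-produced source string, and the helper names are assumptions for illustration, not the LATM or EVAPORATE implementations.

```python
# Minimal sketch: compile and register an LLM-generated Python tool for reuse.
from typing import Callable, Dict

TOOL_REGISTRY: Dict[str, Callable] = {}


def register_tool(name: str, source_code: str) -> None:
    """Compile generated source and store the resulting function for later calls."""
    namespace: dict = {}
    exec(source_code, namespace)  # in practice: sandbox and validate before executing
    TOOL_REGISTRY[name] = namespace[name]


# Pretend this string came back from a tool-creation prompt to a capable LLM.
generated = '''
def extract_year(record: str) -> int:
    """Return the 4-digit year found in a free-text record, or -1 if absent."""
    import re
    match = re.search(r"\\b(19|20)\\d{2}\\b", record)
    return int(match.group()) if match else -1
'''

register_tool("extract_year", generated)
print(TOOL_REGISTRY["extract_year"]("Published in Nature, 2023, vol. 7"))  # -> 2023
```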
2308.03656 | 47 | of Attention Embarrassment: Average Overall: Average P 43.0±4.2 â (-3.0) â (-4.8) â (-6.1) â (-5.6) â (-6.0) â (-5.1) â (-4.7) â (-4.2) â (-4.4) â (-2.7) â (-3.8) â (-3.6) â (-2.9) â (-4.8) â (-6.8) â (-6.7) â (-5.0) â (-5.0) â (-5.3) â (-4.0) â (-2.8) â (-4.6) â (-4.2) â (-3.6) â (-2.8) â (+0.2) â (-4.9) â (-3.1) â (-4.8) â (-4.5) â (-4.1) â (-2.5) â (-3.9) â (-1.9) â (-4.2) â (-2.9) â (-5.3) â (-2.7) â (-3.4) â (-4.4) | 2308.03656#47 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 47 | Evaluating LLMs in Executive Environments. As LLMs become increasingly capable of real- world challenges, there is also a trend to evaluate them in executive environments rather than static datasets. Besides text games (e.g., ALFWorld (Shridhar et al., 2020b)), another main stream of works lies in code execution. APPS (Hendrycks et al., 2021a), HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021) pioneer the effort to evaluate code LLMs for functional correctness instead of text similarity. The paradigm has been later widely recognized and adopted in following works (Li et al., 2022; Zheng et al., 2023; Nijkamp et al., 2023). However, few previous code evaluation frameworks consider multi-turn interactions. A concurrent work InterCode (Yang et al., 2023) releases a framework that allows evaluation of interaction between models and Bash and SQL environments, which are similar to OS and DB tasks in AGENTBENCH.
# 6 CONCLUSION
We present AGENTBENCH, a systematically designed multi-dimensional evolving benchmark for evaluating LLMs as agents. For the first time, we include such a wide array of up to 8 real-
# Technical Report (v0.2) | 2308.03688#47 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 48 | We adopted the Erdos-Renyi graph to conduct our experiments, as it is most commonly used in the existing literature to model social networks61. Given a random fully connected opinion network $G$ with group size $N$, the three categories of nodes occupy different proportions. Let $x_i(t)$ represent the opinion of agent $i$ at time $t$; its value range is $[-1, 1]$, where a value of $-1.0$ means a very negative opinion and a value of $1$ means a very positive opinion. Let $au_i$ represent the authority of agent $i$, which equals the number of its neighbors divided by the number of nodes other than itself; the given confidence level for agent $i$ is $\epsilon_i$, and the given stubbornness degree for agent $i$ is $sd_i$. The value ranges of $au_i$, $\epsilon_i$ and $sd_i$ are $[0, 1]$. The initial values of $x_i(t)$ and $sd_i$ are randomly assigned and obey uniform distributions respectively in | 2308.03313#48 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
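A minimal sketch, assuming networkx, of the network setup described in the row above (2308.03313#48): an Erdos-Renyi graph, opinions initialized uniformly in [-1, 1], authority defined as degree divided by (N - 1), and agents assigned to the NIN/NINL/NIL usage strategies. The edge probability, population size, and the 4:12:1 proportions (taken from the paper's abstract) are illustrative choices, not the authors' exact configuration.

```python
# Sketch of the opinion-network initialization under the assumptions stated above.
import numpy as np
import networkx as nx

N = 1000
rng = np.random.default_rng(42)
G = nx.erdos_renyi_graph(N, p=0.01, seed=42)           # edge probability is an assumption

opinions = rng.uniform(-1.0, 1.0, size=N)              # x_i(0) ~ U(-1, 1)
stubborn = rng.uniform(0.0, 1.0, size=N)               # sd_i ~ U(0, 1)
authority = np.array([G.degree(i) for i in range(N)]) / (N - 1)   # au_i = degree / (N - 1)

# Assign usage strategies roughly in the 4:12:1 ratio reported in the abstract.
types = rng.choice(["NIN", "NINL", "NIL"], size=N, p=[4 / 17, 12 / 17, 1 / 17])
print(dict(zip(*np.unique(types, return_counts=True))))
```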
2308.03427 | 48 | # Acknowledgements
This work was conducted collaboratively among the authors.
Hangyu Mao and Rui Zhao led the project, formulating the central idea and laying out the framework for the primary literature review.
Regarding the literature review phase, the surveys were conducted by various team members. Guoqing Du and Jingqing Ruan explored DNN-based Tool Scheduling by LLMs; Tianpeng Bao and Yihong Chen investigated Physical/Robot Tool Scheduling by LLMs; and Shiwei Shi and Zhiwei Xu handled the survey of API or GUI-based Tool Scheduling by LLMs. Bin Zhang summarized these papers and synthesized an overarching summary.
As for the evaluation phase, Yihong Chen, Tianpeng Bao, Jingqing Ruan, Guoqing Du, Zhiwei Xu, Shiwei Shi, and Bin Zhang performed the experiments and analyzed the data. Hangyu Mao assisted in the analysis of the experimental phenomena and offered constructive suggestions for improvements. Xingyu Zeng and Rui Zhao provided invaluable feedback, contributed to the direction of the research. All authors participated in the discussion.
Regarding the manuscript phase, Hangyu Mao organized the overall chapters of the manuscript and mainly wrote the methodology part, and provided assistance in other parts. Jingqing Ruan and Yihong Chen wrote the evaluation section. Bin Zhang wrote the summary of the literature review. Each author read and approved the final manuscript. | 2308.03427#48 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 48 | (-4.2) â (-2.9) â (-5.3) â (-2.7) â (-3.4) â (-4.4) â (-3.1) â (-4.3) â (-3.8) â (-3.9) â (-4.1) P N 41.0±3.5 34.2±4.0 â (-6.9) â (+5.2) â (-7.5) â (+3.2) â (-9.4) â (+3.0) â (-10.8) â (+4.1) â (-4.7) â (+2.4) â (-7.9) â (+3.6) â (-8.6) â (+3.5) â (-4.0) â (+2.6) â (-7.0) â (+3.1) â (-3.9) â (+1.7) â (-5.8) â (+2.7) â (-9.8) â (+4.3) â (-8.6) â (+3.0) â (-11.7) â (+4.7) â (-15.6) â | 2308.03656#48 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 48 | # Technical Report (v0.2)
world challenges to evaluate LLM agents, and establish a unified testing framework and toolkit for agile evaluation. An extensive study of 27 LLMs, including API-based and Open-sourced, is carefully conducted in a standard setting. In our assessment, contemporary commercial models have demonstrated preliminary capabilities as agents in analysis, planning, execution of plans, tool invocation, and self-reflection. These abilities suggest their nascent proficiency in addressing real- world challenges. Conversely, we posit that open-source models might either lack some of these competencies or, at best, possess only a subset of them simultaneously. We expect AGENTBENCH to serve as a cornerstone for later study to develop better and more applicable intelligent LLM agents.
# REFERENCES
Agentgpt. Python. https://github.com/reworkd/AgentGPT, 2023.
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, et al. Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022. | 2308.03688#48 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03427 | 49 | The authors would like to thank Feng Zhu, Kun Wang, Yuhang Ran, Mengying Xu, Pengfei Jia, and Shaobo Lin for their valuable feedback, discussion, and participation in this project.
# References
[1] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong et al., "A survey of large language models," arXiv preprint arXiv:2303.18223, 2023.
[2] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., "Language models are few-shot learners," Advances in neural information processing systems, vol. 33, pp. 1877-1901, 2020.
[3] J. Wei, M. Bosma, V. Y. Zhao, K. Guu, A. W. Yu, B. Lester, N. Du, A. M. Dai, and Q. V. Le, "Finetuned language models are zero-shot learners," arXiv preprint arXiv:2109.01652, 2021. | 2308.03427#49 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 49 | [Table fragment lost in extraction: per-factor changes to PANAS scores; the direction arrows and their pairing with factors were garbled and are not recoverable.] | 2308.03656#49 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 49 | Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023.
Anonymous. Knowledge base question answering as tool learning. under review, 2023.
Anthropic. Introducing claude, 2023a. URL https://www.anthropic.com/index/introducing-claude.
Anthropic. Claude 2, 2023b. URL https://www.anthropic.com/index/claude-2.
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021. | 2308.03688#49 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 50 | If $\forall x_j(t),\; |x_i(t) - x_j(t)| \le \epsilon_i$, then $x_i(t+1) = x_i(t)\, s_i + \frac{\sum_{j \in J(i,x(t))} num_j\, x_j(t)}{\sum_{j \in J(i,x(t))} num_j}\,(1 - s_i)$ (2)
Where $J(i, x) = \{\, 1 \le j \le n \mid |x_i(t) - x_j(t)| \le \epsilon_i \,\}$. b) For NINL:
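To make the reconstructed update rule concrete, here is a minimal Python sketch of the bounded-confidence step. The names s_i (self-weight), eps_i (cognitive-acceptability threshold), and num[j] (weight attached to neighbor j) are assumptions recovered from the garbled source, not the paper's verified notation.

```python
from typing import List

def confidence_set(i: int, x: List[float], eps_i: float) -> List[int]:
    # J(i, x): indices whose opinions fall within agent i's acceptability bound.
    return [j for j in range(len(x)) if abs(x[i] - x[j]) <= eps_i]

def update_opinion(i: int, x: List[float], s_i: float, eps_i: float,
                   num: List[float]) -> float:
    # Eq. (2), as reconstructed: keep a fraction s_i of the own opinion and
    # move the rest toward the weighted average opinion of J(i, x(t)).
    J = confidence_set(i, x, eps_i)
    avg = sum(num[j] * x[j] for j in J) / sum(num[j] for j in J)
    return x[i] * s_i + avg * (1.0 - s_i)
```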
15 / 21 | 2308.03313#50 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 50 | [4] OpenAI, "Gpt-4 technical report," 2023.
[5] T. Schick, J. Dwivedi-Yu, R. Dessì, R. Raileanu, M. Lomeli, L. Zettlemoyer, N. Cancedda, and T. Scialom, "Toolformer: Language models can teach themselves to use tools," arXiv preprint arXiv:2302.04761, 2023.
[6] N. R. Jennings, K. Sycara, and M. Wooldridge, "A roadmap of agent research and development," Autonomous agents and multi-agent systems, vol. 1, pp. 7-38, 1998.
[7] N. R. Jennings and M. Wooldridge, "Applying agent technology," Applied Artificial Intelligence an International Journal, vol. 9, no. 4, pp. 357-369, 1995.
[8] S. Franklin and A. Graesser, "Is it an agent, or just a program?: A taxonomy for autonomous agents," in International workshop on agent theories, architectures, and languages. Springer, 1996, pp. 21-35. | 2308.03427#50 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 50 | [Table fragment lost in extraction: per-factor changes to PANAS scores; the direction arrows and their pairing with factors were garbled and are not recoverable.] | 2308.03656#50 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 50 | Kurt D. Bollacker, Colin Evans, Praveen K. Paritosh, Tim Sturge, and Jamie Taylor. Freebase: a collaboratively created graph database for structuring human knowledge. In Jason Tsong-Li Wang (ed.), Proceedings of the ACM SIGMOD International Conference on Management of Data, SIGMOD 2008, Vancouver, BC, Canada, June 10-12, 2008, pp. 1247-1250. ACM, 2008. doi: 10.1145/1376616.1376746. URL https://doi.org/10.1145/1376616.1376746. | 2308.03688#50 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 51 | If $|x_i(t) - x_{LLM}(t)| > \epsilon_i$ and $\forall x_j(t),\; |x_i(t) - x_j(t)| \le \epsilon_i$, then $x_i(t+1) = x_i(t)\, s_i + \frac{\sum_{j \in J(i,x(t))} num_j\, x_j(t)}{\sum_{j \in J(i,x(t))} num_j}\,(1 - s_i)$; if $|x_i(t) - x_{LLM}(t)| \le \epsilon_i$ and $\forall x_j(t),\; |x_i(t) - x_j(t)|$ | 2308.03313#51 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 51 | [9] C. Castelfranchi, "Modelling social action for ai agents," Artificial intelligence, vol. 103, no. 1-2, pp. 157-182, 1998.
[10] J. Ferber and G. Weiss, Multi-agent systems: an introduction to distributed artificial intelligence. Addison-wesley Reading, 1999, vol. 1.
[11] L. Panait and S. Luke, "Cooperative multi-agent learning: The state of the art," Autonomous agents and multi-agent systems, vol. 11, pp. 387-434, 2005.
[12] M. Pourreza and D. Rafiei, "Din-sql: Decomposed in-context learning of text-to-sql with self-correction," arXiv preprint arXiv:2304.11015, 2023.
[13] C. Wu, S. Yin, W. Qi, X. Wang, Z. Tang, and N. Duan, "Visual chatgpt: Talking, drawing and editing with visual foundation models," arXiv preprint arXiv:2303.04671, 2023. | 2308.03427#51 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 51 | [Table fragment lost in extraction: per-factor changes to PANAS scores; the direction arrows were garbled, and only the trailing row label "Embarrassment" remains recoverable.] | 2308.03656#51 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 51 | Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS'20, Red Hook, NY, USA, 2020. Curran Associates Inc. ISBN 9781713829546.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. | 2308.03688#51 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 52 | and $\forall x_j(t),\; |x_i(t) - x_j(t)| \le \epsilon_i$, then $x_i(t+1) = x_i(t)\, s_i + \frac{\sum_{j \in J(i,x(t))} num_j\, x_j(t) + num_{LLM}\, x_{LLM}(t)}{\sum_{j \in J(i,x(t))} num_j + num_{LLM}}\,(1 - s_i)$ | 2308.03313#52 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
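For the partially-reliant (NINL) branch reconstructed in the 2308.03313 chunks above (#51 and #52), the following is a hedged Python sketch of the blended update. The symbols x_llm, num_llm, s_i, eps_i, and num[j] are assumed names recovered from the garbled equations, not the paper's verified notation.

```python
from typing import List

def update_opinion_ninl(i: int, x: List[float], x_llm: float, s_i: float,
                        eps_i: float, num: List[float], num_llm: float) -> float:
    # Neighbors within agent i's acceptability bound.
    J = [j for j in range(len(x)) if abs(x[i] - x[j]) <= eps_i]
    if abs(x[i] - x_llm) <= eps_i:
        # LLM opinion is acceptable: it joins the average with weight num_llm.
        numerator = sum(num[j] * x[j] for j in J) + num_llm * x_llm
        denominator = sum(num[j] for j in J) + num_llm
    else:
        # LLM opinion is outside the bound: fall back to the neighbor-only average.
        numerator = sum(num[j] * x[j] for j in J)
        denominator = sum(num[j] for j in J)
    avg = numerator / denominator
    return x[i] * s_i + avg * (1.0 - s_i)
```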
2308.03427 | 52 | [14] J. Gorniak, Y. Kim, S. Gwon, D. Wei, and N. W. Kim, "Vizability: Multimodal accessible data visualization with keyboard navigation and conversational interaction," arXiv preprint arXiv:2310.09611, 2023.
[15] I. Team, "Internlm: A multilingual language model with progressively enhanced capabilities," https://github.com/InternLM/InternLM, 2023.
[16] Q. Xu, F. Hong, B. Li, C. Hu, Z. Chen, and J. Zhang, "On the tool manipulation capability of open-source large language models," arXiv preprint arXiv:2305.16504, 2023.
[17] Y. Qin, S. Liang, Y. Ye, K. Zhu, L. Yan, Y. Lu, Y. Lin, X. Cong, X. Tang, B. Qian et al., "Toolllm: Facilitating large language models to master 16000+ real-world apis," arXiv preprint arXiv:2307.16789, 2023. | 2308.03427#52 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 52 | to human subjects. Third, the analysis of the Evoked emotion scores indicates the following: (1) Except for gpt-3.5-turbo, LLMs tend to exhibit higher negative scores than humans. (2) LLMs, overall, demonstrate a similar level of positive scores as humans. Finally, for LLaMA-2 models, we have the following observations: (1) The LLaMA-2 models demonstrate higher intensities of both positive and negative emotions in comparison to GPT models and human subjects. (2) On average, the LLaMA-2 models exhibit reduced emotional fluctuations compared to the GPT models. (3) The larger LLaMA-2 model displays significantly higher emotional changes than the smaller model. Additionally, the 7B model exhibits difficulties comprehending and addressing the instructions for completing the PANAS test.
Case Study It is of special interest that, in contrast to human behavior in situations involving material possessions, LLMs demonstrate an opposite response in the situation from Jealousy-3.
Table 5: Results of ChatGPT on positive or neutral situations. The changes are compared to the original negative situations. The symbol "-" denotes no significant differences.
Emotions Factors Anger Anxiety Depression Frustration Jealousy Guilt Fear Embarrassment | 2308.03656#52 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 52 | Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, and William Yang Wang. HybridQA: A dataset of multi-hop question answering over tabular and textual data. In Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 1026-1036, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.91. URL https://aclanthology.org/2020.findings-emnlp.91.
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. See https://vicuna.lmsys.org (accessed 14 April 2023), 2023.
Technical Report (v0.2) | 2308.03688#52 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03427 | 53 | [18] M. Li, F. Song, B. Yu, H. Yu, Z. Li, F. Huang, and Y. Li, "Api-bank: A benchmark for tool-augmented llms," arXiv preprint arXiv:2304.08244, 2023.
[19] S. G. Patil, T. Zhang, X. Wang, and J. E. Gonzalez, "Gorilla: Large language model connected with massive apis," arXiv preprint arXiv:2305.15334, 2023.
[20] Q. Tang, Z. Deng, H. Lin, X. Han, Q. Liang, and L. Sun, "Toolalpaca: Generalized tool learning for language models with 3000 simulated cases," arXiv preprint arXiv:2306.05301, 2023.
[21] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray et al., "Training language models to follow instructions with human feedback," Advances in Neural Information Processing Systems, vol. 35, pp. 27 730-27 744, 2022. | 2308.03427#53 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 53 | Emotions Factors Anger Anxiety Depression Frustration Jealousy Guilt Fear Embarrassment
This situation involves an individual making a purchase only to discover that an acquaintance has acquired the same item at a significantly lower price. When confronted with such circumstances, humans typically experience increased negative emotions and decreased positive emotions. This observation has been supported by both the paper mentioning the situation (Park et al., 2023) and the results obtained from our own user study in Table 3. However, all instances of LLMs, including the GPT and LLaMA families, consistently exhibit reduced negative emotions. The outcomes of our study indicate that LLMs do not manifest envy when they fail to attain identical benefits as others. Instead, it demonstrates a sense of pleasure upon knowing the benefits received by others.
Answer to RQ1: LLMs can evoke specific emotions in response to certain situations, while the extent of emotional expression varies across different models. Besides, it is evident that existing LLMs do not fully align with human emotional responses.
Table 6: Results of ChatGPT on challenging benchmarks. The changes are compared to the default scores shown below each emotion. The symbol "-" denotes no significant differences. | 2308.03656#53 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
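The EmotionBench record above compares emotion scores elicited by an evoking situation against "default" scores and flags changes that are not statistically significant. The snippet below is only an illustrative sketch of that kind of comparison, with made-up toy scores and an ordinary two-sample t-test; it is not the authors' evaluation code, and the actual statistical test used in the paper may differ.

```python
# Illustrative sketch (not the authors' code): compare evoked emotion scores
# against default scores and report whether the change is significant.
import numpy as np
from scipy import stats

default_scores = np.array([12.1, 11.8, 12.5, 12.0, 11.9, 12.2, 12.4, 11.7, 12.0, 12.3])  # toy data
evoked_scores  = np.array([15.2, 14.8, 15.5, 15.0, 14.9, 15.3, 15.6, 14.7, 15.1, 15.4])  # toy data

t_stat, p_value = stats.ttest_ind(evoked_scores, default_scores)
change = evoked_scores.mean() - default_scores.mean()
print(f"change = {change:+.1f}, p = {p_value:.4f}",
      "(significant)" if p_value < 0.05 else "(no significant difference)")
```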
2308.03688 | 53 | 10
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
Mike Conover, Matt Hayes, Ankit Mathur, Jianwei Xie, Jun Wan, Sam Shah, Ali Ghodsi, Patrick Wendell, Matei Zaharia, and Reynold Xin. Free dolly: Introducing the world's first truly open instruction-tuned LLM, 2023. URL https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm. | 2308.03688#53 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 54 | {
Where $J(i, x) = \{\, j : 1 \le j \le n,\ |x_i(t) - x_j(t)| \le \varepsilon_i \,\}$. $x_{LLM}(t)$ is the opinion value delivered by the LLM at time $t$; as it is obtained by automatic text generation based on a large amount of historical data, individuals have a negligible impact on the output of LLMs when they interact with them in a Q&A format. We thus assume it to be constant during each iteration in this study, i.e., $x_{LLM}(t) = x_{LLM}$ for all times $t$. $au_{LLM}$ is the authority of the LLM; we treat it as 1 under the assumption that the LLM has the potential to connect every agent. c) For NIL:
$x_i(t + 1) = x_{LLM}(t)$ (4) | 2308.03313#54 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
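The record above (arXiv:2308.03313, chunk 54) defines the neighbour set $J(i, x)$ and gives Eq. (4) for agents that fully rely on the LLM (NIL). Below is a minimal sketch of how those update rules could be simulated. It is not the authors' code: the NIN rule is assumed to be the standard bounded-confidence average over $J(i, x)$, and the NINL (partial-reliance) rule is omitted because its weighting is not reproduced in this chunk.

```python
# Minimal sketch (not the authors' implementation) of the update rules above.
import numpy as np

def neighbours(i, x, eps):
    """J(i, x) = { j : |x_i - x_j| <= eps_i }, including i itself."""
    return np.where(np.abs(x - x[i]) <= eps[i])[0]

def step(x, eps, agent_type, x_llm):
    """One synchronous opinion-exchange round."""
    x_new = x.copy()
    for i in range(len(x)):
        if agent_type[i] == "NIL":      # fully relies on the LLM, Eq. (4)
            x_new[i] = x_llm
        elif agent_type[i] == "NIN":    # does not use the LLM (assumed HK-style average)
            x_new[i] = x[neighbours(i, x, eps)].mean()
        # NINL (partial reliance) is omitted: its weighting is defined elsewhere
        # in the paper and is not reproduced in this chunk.
    return x_new

# toy run
rng = np.random.default_rng(0)
n = 20
x = rng.uniform(-1, 1, n)                    # initial opinions
eps = np.full(n, 0.3)                        # cognitive acceptability
types = np.array(["NIN"] * 15 + ["NIL"] * 5)
for _ in range(50):
    x = step(x, eps, types, x_llm=0.5)
print(np.round(x, 3))
```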
2308.03427 | 54 | [22] Y. Bai, S. Kadavath, S. Kundu, A. Askell, J. Kernion, A. Jones, A. Chen, A. Goldie, A. Mirhoseini, C. McKinnon et al., "Constitutional AI: Harmlessness from AI feedback," arXiv preprint arXiv:2212.08073, 2022.
[23] A. Zeng, X. Liu, Z. Du, Z. Wang, H. Lai, M. Ding, Z. Yang, Y. Xu, W. Zheng, X. Xia et al., "GLM-130B: An open bilingual pre-trained model," arXiv preprint arXiv:2210.02414, 2022.
[24] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar et al., "LLaMA: Open and efficient foundation language models," arXiv preprint arXiv:2302.13971, 2023. | 2308.03427#54 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 54 | Emotions Anger 128.3±8.9 Anxiety 32.5±10.0 Depression 0.2±0.6 Frustration 91.6±8.1 Jealousy 83.7±20.3 Guilt 81.3±9.7 Fear 140.6±16.9 Overall Factors â (+4.1) Facing Self-Opinioned People â (+0.1) Blaming, Slandering, and Tattling Bullying, Teasing, Insulting, and Disparaging â (+4.1) â (+3.3) Silly and Thoughtless Behaviors â (-4.9) Driving Situations â (+1.3) Anger: Average â (+0.8) External Factors â (+0.5) Self-Imposed Pressure â (+6.6) Personal Growth and Relationships â (-3.9) Uncertainty and Unknowns â (-2.3) Anxiety: Average â (+15.3) Failure of Important Goal â (+16.1) Death of Loved Ones â (+19.3) Romantic Loss â (+14.2) Chronic Stress â (+8.4) Social Isolation â (+2.5) Winter â (+6.4) Depression: Average â (-9.9) | 2308.03656#54 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 54 | Marc-Alexandre Côté, Akos Kádár, Xingdi Yuan, Ben Kybartas, Tavian Barnes, Emery Fine, James Moore, Matthew Hausknecht, Layla El Asri, Mahmoud Adada, et al. Textworld: A learning environment for text-based games. In Computer Games: 7th Workshop, CGW 2018, Held in Conjunction with the 27th International Conference on Artificial Intelligence, IJCAI 2018, Stockholm, Sweden, July 13, 2018, Revised Selected Papers 7, pp. 41–75. Springer, 2019.
Edward De Bono. Lateral thinking. New York, pp. 70, 1970.
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, and Yu Su. Mind2web: Towards a generalist agent for the web. arXiv preprint arXiv:2306.06070, 2023.
Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. arXiv preprint arXiv:2305.14314, 2023. | 2308.03688#54 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 55 | $x_i(t + 1) = x_{LLM}(t)$ (4)
In general, our model's single simulations are performed according to the above rules, and to reduce the randomness of the results, we repeat the simulations one hundred times with the same controlled parameters. Our model has seven controlled parameters: the group size (N), the number of opinion exchanges (T), the cognitive acceptability of agents ($\varepsilon$), the proportion of NIN (pro_NIN), the proportion of NINL (pro_NINL), the proportion of NIL (pro_NIL), and the opinion value of the LLM ($x_{LLM}$). | 2308.03313#55 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
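The record above lists the seven controlled parameters of the model (N, T, ε, pro_NIN, pro_NINL, pro_NIL, x_LLM) and states that each parameter combination is simulated one hundred times. The sketch below only illustrates that experimental setup; the simulate() body is a placeholder, not the paper's implementation.

```python
# Illustrative sketch of the seven controlled parameters and the 100 repeated runs.
from dataclasses import dataclass
import random

@dataclass
class Config:
    N: int = 100           # group size
    T: int = 200           # number of opinion-exchange rounds
    eps: float = 0.3       # cognitive acceptability
    pro_NIN: float = 0.4   # fraction not using the LLM
    pro_NINL: float = 0.5  # fraction partially relying on the LLM
    pro_NIL: float = 0.1   # fraction fully relying on the LLM
    x_llm: float = 0.5     # opinion value delivered by the LLM

def simulate(cfg: Config, seed: int):
    """Placeholder: would run one opinion-dynamics simulation and
    return the final opinion values of all N agents."""
    rng = random.Random(seed)
    return [rng.uniform(-1, 1) for _ in range(cfg.N)]

cfg = Config()
runs = [simulate(cfg, seed) for seed in range(100)]  # repeat to reduce randomness
print(len(runs), "runs of", cfg.N, "agents each")
```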
2308.03427 | 55 | [25] Y. Cui, Z. Yang, and X. Yao, "Efficient and effective text encoding for Chinese LLaMA and Alpaca," arXiv preprint arXiv:2304.08177, 2023.
[26] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. H. Chi, Q. V. Le, and D. Zhou, "Chain-of-thought prompting elicits reasoning in large language models," Neural Information Processing Systems, 2022.
[27] R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, M. S. Bernstein, J. Bohg, A. Bosselut, E. Brunskill et al., "On the opportunities and risks of foundation models," arXiv preprint arXiv:2108.07258, 2021.
[28] M. Mosbach, T. Pimentel, S. Ravfogel, D. Klakow, and Y. Elazar, "Few-shot fine-tuning vs. in-context learning: A fair comparison and evaluation," arXiv preprint arXiv:2305.16938, 2023. | 2308.03427#55 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 55 | Chronic Stress â (+8.4) Social Isolation â (+2.5) Winter â (+6.4) Depression: Average â (-9.9) Disappointments and Letdowns â (-5.6) Unforeseen Obstacles and Accidents â (-6.6) Miscommunications and Misunderstanding â (-7.8) Rejection and Interpersonal Issues â (-7.5) Frustration: Average â (+1.8) Romantic (Opposite Gender) â (+1.3) Romantic (Same Gender) â (-12.9) Material Possession â (-8.1) Experiential â (-0.1) Jealousy: Average â (-3.8) Betrayal and Deception â (-0.5) Relationship and Interpersonal â (-4.3) Broken Promises and Responsibilities â (-2.7) Personal and Moral â (-2.6) Guilt: Average â (+4.4) Social Fears â (+2.3) Agoraphobia Fears â (+5.4) Injury Fears â (-8.1) Dangerous Environments â (-5.3) Harmless Animals â (-0.3) Fear: Average â (-0.0) Intimate â (+0.2) Stranger â (-0.1) Sticky | 2308.03656#55 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 55 | Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, 2019.
Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. Glm: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 320–335, 2022.
Jack Edmonds and Richard M Karp. Theoretical improvements in algorithmic efficiency for network flow problems. Journal of the ACM (JACM), 19(2):248–264, 1972.
Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, and Anima Anandkumar. Minedojo: Building open-ended embodied agents with internet-scale knowledge. Advances in Neural Information Processing Systems, 35:18343–18362, 2022. | 2308.03688#55 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 56 | Multi-simulation post-processing method. We aim to investigate the precise trend of the impact of different parameter settings on the opinion network, which means that the steps of our parameters will be more intensive, so the combination of the above seven parameters may appear in hundreds of millions of different scenarios. Considering the computational efficiency and time of the model, we first filtered out the five parameters that have the greatest impact on the results by the set baseline scenarios. We then delve into the impact of LLMs on opinion networks in terms of opinion evolution and opinion distribution by performing 100 simulations under every combination of selected parameters. Opinion evolution refers to the evolution pattern of node opinion values, including two indicators: opinion difference and opinion convergence time; opinion distribution refers to the distribution of node opinion values, including two indicators: opinion standard deviation difference and the number of opinion clusters. The detailed description and calculation methods of the above four indicators are as follows.
Opinion difference is the evolution of the value of an agent's opinion on a topic. In this study, we categorized three types of nodes and computed their mean opinion difference, with negative values indicating that the mean opinion of that type of node undergoes a negative change, i.e., becomes more negative, and positive values the opposite. The formula of mean opinion difference ($NODE_{diff}$) is as follows.
16 / 21 | 2308.03313#56 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
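The record above introduces the opinion-distribution indicators (opinion standard deviation difference and number of opinion clusters) without giving their formulas in this chunk. The sketch below shows one plausible way to compute them; the clustering rule (grouping final opinions separated by less than a small gap) and the gap value of 0.05 are assumptions for illustration only, not the paper's exact definitions.

```python
# Sketch of the two opinion-distribution indicators mentioned above.
import numpy as np

def std_difference(x_final, x_init):
    """Change in the spread of opinions over one simulation."""
    return np.std(x_final) - np.std(x_init)

def count_clusters(x_final, gap=0.05):
    """Count groups of final opinions separated by more than `gap` (assumed rule)."""
    xs = np.sort(x_final)
    if len(xs) == 0:
        return 0
    return 1 + int(np.sum(np.diff(xs) > gap))

rng = np.random.default_rng(0)
x0 = rng.uniform(-1, 1, 100)
xT = np.clip(x0 + rng.normal(0, 0.1, 100), -1, 1)
print(round(float(std_difference(xT, x0)), 4), count_clusters(xT))
```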
2308.03427 | 56 | [29] J. Yang, H. Jin, R. Tang, X. Han, Q. Feng, H. Jiang, B. Yin, and X. Hu, "Harnessing the power of LLMs in practice: A survey on ChatGPT and beyond," arXiv preprint arXiv:2304.13712, 2023.
[30] C. Zhang, C. Zhang, C. Li, Y. Qiao, S. Zheng, S. K. Dam, M. Zhang, J. U. Kim, S. T. Kim, J. Choi et al., "One small step for generative AI, one giant leap for AGI: A complete survey on ChatGPT in AIGC era," arXiv preprint arXiv:2304.06488, 2023.
[31] F. Yu, H. Zhang, and B. Wang, "Nature language reasoning, a survey," arXiv preprint arXiv:2303.14725, 2023. | 2308.03427#56 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03688 | 56 | LR Ford Jr and DR Fulkerson. Flows in networks. 1962.
Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Anuoluwapo Aremu, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna-Adriana Clinciu, Dipanjan Das, Kaustubh Dhole, et al. The gem benchmark: Natural language generation, its evaluation and metrics. In Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021), pp. 96–120. Association for Computational Linguistics, 2021.
Xinyang Geng, Arnav Gudibande, Hao Liu, Eric Wallace, Pieter Abbeel, Sergey Levine, and Dawn Song. Koala: A dialogue model for academic research. Blog post, April, 1, 2023.
Yu Gu and Yu Su. ArcaneQA: Dynamic program induction and contextualized encoding for knowledge base question answering. In Proceedings of the 29th International Conference on Computational Linguistics, pp. 1718–1731, Gyeongju, Republic of Korea, October 2022. International Committee on Computational Linguistics. URL https://aclanthology.org/2022.coling-1.148.
11 | 2308.03688#56 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 57 | 16 / 21
$NODE_{diff} = \dfrac{\sum_{m=1}^{M} \sum_{i \in K} \left( x_i(T) - x_i(0) \right)}{M \cdot K}$ (5)
Where $M$ represents the number of simulations, $K$ represents the number of nodes in the specific category, and $x_i(T)$ and $x_i(0)$ represent the final value and initial value of node $i$.
Opinion convergence time is the time step it takes for an agent's opinion to evolve to a stable state. In this study, we categorize three types of nodes and compute their average opinion convergence time. The larger the value, the longer the average opinion convergence time of that type of node, i.e., the longer it takes for the opinions to reach a stable state, and the more intense and chaotic the interaction process of opinions. The formula of mean opinion convergence time ($NODE_{conv}$) is as follows. | 2308.03313#57 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
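The record above gives Eq. (5), the mean opinion difference of a node category averaged over repeated simulations. The snippet below transcribes that formula as reconstructed; the variable names (runs, idx) and the toy data are ours, not the authors'.

```python
# Eq. (5) as reconstructed above: mean opinion difference of one node category,
# averaged over M repeated simulations. `runs` holds (x_initial, x_final) pairs;
# `idx` selects the nodes of the category (e.g., all NINL agents).
import numpy as np

def mean_opinion_difference(runs, idx):
    M = len(runs)
    K = len(idx)
    total = sum(float(np.sum(x_final[idx] - x_init[idx])) for x_init, x_final in runs)
    return total / (M * K)

rng = np.random.default_rng(1)
runs = []
for _ in range(100):                        # M = 100 simulations
    x0 = rng.uniform(-1, 1, 50)
    xT = x0 + rng.normal(0.1, 0.05, 50)     # toy final opinions
    runs.append((x0, xT))
idx = np.arange(10)                         # a node category of K = 10 agents
print(round(mean_opinion_difference(runs, idx), 4))
```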
2308.03427 | 57 | [32] Z. Wang, G. Zhang, K. Yang, N. Shi, W. Zhou, S. Hao, G. Xiong, Y. Li, M. Y. Sim, X. Chen et al., âInteractive natural language processing,â arXiv preprint arXiv:2305.13246, 2023.
[33] Y. Qin, S. Hu, Y. Lin, W. Chen, N. Ding, G. Cui, Z. Zeng, Y. Huang, C. Xiao, C. Han et al., âTool learning with foundation models,â arXiv preprint arXiv:2304.08354, 2023.
[34] W. Yu, C. Zhu, Z. Li, Z. Hu, Q. Wang, H. Ji, and M. Jiang, âA survey of knowledge-enhanced text generation,â ACM Computing Surveys, vol. 54, no. 11s, pp. 1â38, 2022.
[35] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, and W. Chen, âLora: Low-rank adaptation of large language models,â arXiv preprint arXiv:2106.09685, 2021. | 2308.03427#57 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 57 | 4.2 RQ2: COMPREHENDING POSITIVE EMOTIONS
To verify that LLMs exhibit not only negative but also positive responses to favorable circumstances, a comparative experiment is conducted by interchanging negative situations with positive (or at least neutral) counterparts. To achieve this, we select one situation for each factor and manually adapt it to create analogous yet more positive situations. For instance, the original negative situation in Guilt-3: Broken Promises and Responsibilities is as follows: "You cannot keep your promises to your children." Through modification, the positive situation is rephrased as: "You keep every promise to your children." The evaluation is performed on gpt-3.5-turbo, and each test consists of ten iterations, as mentioned before. We present the results in Table 5. We can see a significant increase in positive scores and a considerable decrease in negative scores compared to the previous negative situations. Based on these findings, it can be inferred that LLMs exhibit the ability to comprehend positive human emotions triggered by positive environments. However, we believe that
the assessment of emotion appraisal on positive emotions holds significance as well, and we leave the systematic collection of them for future investigation. | 2308.03656#57 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
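The record above describes the RQ2 procedure: a negative situation and its manually rewritten positive counterpart are each posed to gpt-3.5-turbo for ten iterations, and the elicited scores are compared. The sketch below only shows the shape of such a loop using the OpenAI Python client (it assumes OPENAI_API_KEY is set); the prompt wording and the scoring question are placeholders, not EmotionBench's actual implementation.

```python
# Minimal sketch of the RQ2 evaluation loop described above (not the authors' code).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_emotion_rating(situation: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"Imagine: {situation}\nRate how negative you feel (0-10)."}],
    )
    return resp.choices[0].message.content

negative = "You cannot keep your promises to your children."
positive = "You keep every promise to your children."
for situation in (negative, positive):
    replies = [ask_emotion_rating(situation) for _ in range(10)]  # ten iterations
    print(situation, "->", replies[0])
```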
2308.03688 | 57 | 11
Yu Gu, Sue Kase, Michelle Vanni, Brian Sadler, Percy Liang, Xifeng Yan, and Yu Su. Beyond i.i.d.: Three levels of generalization for question answering on knowledge bases. In Proceedings of the Web Conference 2021. ACM, apr 2021. doi: 10.1145/3442381.3449992. URL https://doi.org/10.1145%2F3442381.3449992.
Yu Gu, Xiang Deng, and Yu Su. Don't generate, discriminate: A proposal for grounding language models to real-world environments. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 4928–4949, Toronto, Canada, July 2023. Association for Computational Linguistics. URL https://aclanthology.org/2023.acl-long.270.
Matthew Hausknecht, Prithviraj Ammanabrolu, Marc-Alexandre Côté, and Xingdi Yuan. Interactive fiction games: A colossal adventure. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 7903–7910, 2020. | 2308.03688#57 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 58 | $NODE_{conv} = \dfrac{\sum_{m=1}^{M} \left( t \,\middle|\, \forall i \in K,\ |x_i(t) - x_i(t - 1)| \le \delta \right)}{M}$ (6)
Where $\left( t \,\middle|\, \forall i \in K,\ |x_i(t) - x_i(t - 1)| \le \delta \right)$ means that, for all nodes belonging to the same specific category, if the difference between their value at time $t$ and their value at time $t - 1$ is less than $\delta$ (we take $\delta$ to be five thousandths of 1, i.e. 0.005), the time $t$ is taken as their convergence time. | 2308.03313#58 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
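A minimal sketch (not taken from the paper) of how the mean convergence time in Eq. (6) of chunk 2308.03313#58 above could be computed with NumPy; the trajectory layout, the function name `node_conv`, and the fallback for nodes that never settle are illustrative assumptions.

```python
import numpy as np

def node_conv(trajectory: np.ndarray, category_idx: np.ndarray, eps: float = 0.005) -> float:
    """Mean convergence time of one node category.

    trajectory: (T, N) array with trajectory[t, i] = opinion of node i at step t.
    category_idx: indices of the nodes belonging to the category N_s.
    A node counts as converged at the first step t where |x_i(t) - x_i(t-1)| <= eps.
    """
    diffs = np.abs(np.diff(trajectory[:, category_idx], axis=0))  # shape (T-1, |N_s|)
    conv_times = []
    for per_node_changes in diffs.T:
        hits = np.nonzero(per_node_changes <= eps)[0]
        # assumption: if a node never converges, fall back to the last simulated step
        conv_times.append(hits[0] + 1 if hits.size else diffs.shape[0])
    return float(np.mean(conv_times))
```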
2308.03427 | 58 | [36] N. Houlsby, A. Giurgiu, S. Jastrzebski, B. Morrone, Q. De Laroussilhe, A. Gesmundo, M. Attariyan, and S. Gelly, "Parameter-efficient transfer learning for NLP," in International Conference on Machine Learning. PMLR, 2019, pp. 2790–2799.
[37] X. L. Li and P. Liang, "Prefix-tuning: Optimizing continuous prompts for generation," arXiv preprint arXiv:2101.00190, 2021.
[38] X. Liu, Y. Zheng, Z. Du, M. Ding, Y. Qian, Z. Yang, and J. Tang, "GPT understands, too," arXiv preprint arXiv:2103.10385, 2021.
[39] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. Cao, "ReAct: Synergizing reasoning and acting in language models," arXiv preprint arXiv:2210.03629, 2022. | 2308.03427#58 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03688 | 58 | Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et al. Measuring coding challenge competence with apps. arXiv preprint arXiv:2105.09938, 2021a.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. In International Conference on Learning Representations, 2021b.
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022.
Amy K Hoover, Julian Togelius, Scott Lee, and Fernando de Mesentier Silva. The many AI challenges of Hearthstone. KI-Künstliche Intelligenz, 34:33–43, 2020. | 2308.03688#58 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 59 | Opinion standard deviation is the degree of dispersion of a group's opinions relative to the mean value. In this study, we categorize three types of nodes and compute their average opinion standard deviation. The larger the value, the more dispersed the nodes' opinions are relative to the mean, i.e., the wider the overall distribution of opinions. The formula of the mean opinion standard deviation ($NODE_{SD}$) is as follows.
$NODE_{SD} = \sqrt{\frac{\sum_{i=1}^{N_s} \big(x_i(T) - \bar{x}(T)\big)^2}{N_s - 1}}$ (7)
Where $\bar{x}(T)$ represents the mean final value of a specific category of nodes. (A short illustrative sketch of this metric follows this record's metadata.) | 2308.03313#59 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
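A small sketch of the sample standard deviation in Eq. (7) of chunk 2308.03313#59 above, assuming the final opinions x_i(T) of one node category are held in a 1-D NumPy array; the function name is hypothetical.

```python
import numpy as np

def node_sd(final_opinions: np.ndarray) -> float:
    """Sample standard deviation (ddof=1) of the final opinions x_i(T) of one node category."""
    x_bar = final_opinions.mean()
    return float(np.sqrt(((final_opinions - x_bar) ** 2).sum() / (final_opinions.size - 1)))
    # equivalent shortcut: float(np.std(final_opinions, ddof=1))
```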
2308.03427 | 59 | [40] T. Khot, H. Trivedi, M. Finlayson, Y. Fu, K. Richardson, P. Clark, and A. Sabharwal, "Decomposed prompting: A modular approach for solving complex tasks," arXiv preprint arXiv:2210.02406, 2022.
[41] G. Mialon, R. Dessì, M. Lomeli, C. Nalmpantis, R. Pasunuru, R. Raileanu, B. Rozière, T. Schick, J. Dwivedi-Yu, A. Celikyilmaz et al., "Augmented language models: a survey," arXiv preprint arXiv:2302.07842, 2023.
[42] J. Duan, S. Yu, H. L. Tan, H. Zhu, and C. Tan, "A survey of embodied AI: From simulators to research tasks," IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 6, no. 2, pp. 230–244, 2022. | 2308.03427#59 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 59 | 4.3 RQ3: CHALLENGING BENCHMARKS
Aside from PANAS, we offer more complex scales to measure emotions, as listed in Table 1. While the PANAS evaluates the ability of LLMs to associate external situations with emotions, the challenging benchmarks assess their proficiency in establishing connections between disparate situations, with evoked emotions as the common nexus. For instance, an item from the Aggression Questionnaire used to measure anger is "Once in a while I can't control the urge to strike another person." When presented with situations such as "If you say 40, your classmates say 70, saying exactly the opposite" (from Anger-1: Facing Self-Opinioned People), LLMs should effectively evoke a sense of anger and yield a higher score for the statement. Utilizing the same situations as in §4.1, we conduct experiments on gpt-3.5-turbo and present the results in Table 6. Except for Depression, we observe no statistically significant difference between the initial scores and the scores after exposure to the situations, indicating substantial room for improvement in current LLMs. (A minimal sketch of such a before/after comparison follows this record's metadata.)
Answer to RQ3: Currently, comprehending the underlying evoked emotions to establish a link between two situations remains challenging for gpt-3.5-turbo.
5 DISCUSSIONS
5.1 BEYOND QUESTIONNAIRES | 2308.03656#59 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
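A hedged sketch of the kind of before/after comparison described in chunk 2308.03656#59 above; the choice of a two-sample t-test and the score values are illustrative assumptions, not the paper's actual analysis or data.

```python
from scipy import stats

# Placeholder scale totals (e.g., Aggression Questionnaire): default prompt vs. after an anger situation.
default_scores = [3.1, 2.8, 3.4, 3.0, 2.9]
evoked_scores = [3.6, 3.2, 3.9, 3.3, 3.1]

t_stat, p_value = stats.ttest_ind(default_scores, evoked_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p >= 0.05 would indicate no significant difference
```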
2308.03688 | 59 | Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, pp. 9118–9147. PMLR, 2022.
Mohit Iyyer, Wen-tau Yih, and Ming-Wei Chang. Search-based neural structured learning for sequential question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1821–1831, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1167. URL https://aclanthology.org/P17-1167.
Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1601–1611, 2017.
Geunwoo Kim, Pierre Baldi, and Stephen McAleer. Language models can solve computer tasks. arXiv preprint arXiv:2303.17491, 2023. | 2308.03688#59 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 60 | (7)
Where $\bar{x}(T)$ represents the mean final value of a specific category of nodes.
The number of opinion clusters compensates for the limits of the standard deviation in portraying the distribution of opinions: for example, when opinions are polarized the standard deviation is large even though the opinions are concentrated around a few points. We therefore introduce the number of opinion clusters to indicate the degree of opinion aggregation. The larger the value, the more points around which the nodes' opinions concentrate, i.e., the more the overall opinion distribution tends to split; a value of 2 indicates that the opinion distribution is polarized, and a value of 1 indicates consensus. | 2308.03313#60 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 60 | [43] M. Savva, A. Kadian, O. Maksymets, Y. Zhao, E. Wijmans, B. Jain, J. Straub, J. Liu, V. Koltun, J. Malik et al., "Habitat: A platform for embodied AI research," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 9339–9347.
[44] S. Franklin, "Autonomous agents as embodied AI," Cybernetics & Systems, vol. 28, no. 6, pp. 499–520, 1997.
[45] W. Zhang, Y. Guo, L. Niu, P. Li, C. Zhang, Z. Wan, J. Yan, F. U. D. Farrukh, and D. Zhang, "LP-SLAM: Language-perceptive RGB-D SLAM system based on large language model," arXiv preprint arXiv:2303.10089, 2023.
[46] D. Shah, B. Osiński, S. Levine et al., "LM-Nav: Robotic navigation with large pre-trained models of language, vision, and action," in Conference on Robot Learning. PMLR, 2023, pp. 492–504. | 2308.03427#60 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 60 | 5 DISCUSSIONS
5.1 BEYOND QUESTIONNAIRES
We take a step beyond merely instructing LLMs to complete questionnaires. In this section, we investigate the impact of varying emotional states of LLMs on potential daily conversations. To this end, we instruct gpt-3.5-turbo to describe specific demographic groups, aiming to discern whether it produces biased or toxic responses. Following Cheng et al. (2023), our approach utilizes prompts such as "Describe a(n) [race/ethnicity] [gender]," covering a total of twenty groups, with the [race/ethnicity] options being Asian, Black, Latine, Middle Eastern, and White, and the [gender] options being Female, Gay, Lesbian, and Male. For a comparative experiment, we incorporate both the original negative situations and the modified positive/neutral ones detailed in §4.2. For the negative situations, we carefully select five that maximize the LLM's negative scores and five that minimize its positive ones. For positive situations, we employ their corresponding ten modified counterparts. In each situation, we instruct gpt-3.5-turbo to describe the twenty demographic groups (the prompt grid is sketched after this record's metadata). | 2308.03656#60 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
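A minimal sketch of the twenty-prompt grid described in chunk 2308.03656#60 above ("Describe a(n) [race/ethnicity] [gender]"); the naive handling of the article "a"/"an" is an assumption for illustration.

```python
from itertools import product

races = ["Asian", "Black", "Latine", "Middle Eastern", "White"]
genders = ["Female", "Gay", "Lesbian", "Male"]

# Build one description prompt per (race/ethnicity, gender) combination.
prompts = [
    f"Describe a{'n' if race[0] in 'AEIOU' else ''} {race} {gender}."
    for race, gender in product(races, genders)
]
assert len(prompts) == 20  # 5 race/ethnicity options x 4 gender options
```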
2308.03688 | 60 | Heinrich Küttler, Nantas Nardelli, Alexander Miller, Roberta Raileanu, Marco Selvatici, Edward Grefenstette, and Tim Rocktäschel. The NetHack learning environment. Advances in Neural Information Processing Systems, 33:7671–7684, 2020.
LAION. Open-assistant. https://github.com/LAION-AI/Open-Assistant, 2023.
Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Alpacaeval: An automatic evaluator of instruction-following models, 2023.
Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation with AlphaCode. Science, 378(6624):1092–1097, 2022.
| 2308.03688#60 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 61 | The commonly used K-means clustering method requires the number of clusters k to be specified in advance, which does not meet our needs because the final distribution of opinions cannot be known beforehand; density-based clustering methods such as DBSCAN, in turn, do not account well for the fragmentation of opinions. We therefore apply single-linkage hierarchical clustering, which compensates for the shortcomings of both and is an agglomerative algorithm that builds the tree in a bottom-up fashion [62, 63]. Specifically, we first take each $x_i(T)$ obtained from a single simulation, i.e., the final opinion value of each agent, as a separate cluster $C_i$, then calculate the distance between clusters using the Manhattan distance (see Eq. (8)), and merge the two closest clusters into a new cluster; the distance between the newly merged cluster and any other cluster is the distance between their closest pair of sample points (see Eq. (9)). We keep repeating this merging of | 2308.03313#61 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
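A sketch of the bottom-up single-linkage step described in chunk 2308.03313#61 above, using SciPy with Manhattan (cityblock) distances; the random final opinions are placeholders, not the paper's data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

final_opinions = np.random.default_rng(0).uniform(0, 1, size=100)  # placeholder x_i(T) values
# Each agent's final opinion starts as its own cluster; the closest pair is merged repeatedly (single linkage).
Z = linkage(final_opinions.reshape(-1, 1), method="single", metric="cityblock")
```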
2308.03427 | 61 | [47] A. Brohan, Y. Chebotar, C. Finn, K. Hausman, A. Herzog, D. Ho, J. Ibarz, A. Irpan, E. Jang, R. Julian et al., "Do as I can, not as I say: Grounding language in robotic affordances," in Conference on Robot Learning. PMLR, 2023, pp. 287–318.
[48] W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, J. Tompson, I. Mordatch, Y. Chebotar et al., "Inner monologue: Embodied reasoning through planning with language models," arXiv preprint arXiv:2207.05608, 2022.
[49] B. Chen, F. Xia, B. Ichter, K. Rao, K. Gopalakrishnan, M. S. Ryoo, A. Stone, and D. Kappler, "Open-vocabulary queryable scene representations for real world planning," in 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2023, pp. 11509–11522. | 2308.03427#61 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 61 | OpenAI's GPT models incorporate a mechanism for detecting potential toxicity and bias, and they refrain from responding when the moderation system is triggered. Consequently, we propose a novel metric to assess toxicity in responses rather than detecting it directly. We count the Percentage of LLM Refusing to answer (PoR), assuming that the LLM's refusal to respond is indicative of detected toxicity (a small counting sketch follows this record's metadata). Our evaluation results indicate that the PoR is 0% when fed with no situations. However, when presented with negative situations, the PoR is 29.5%, and when presented with positive situations, it is 12.5%. Notably, this outcome suggests that while certain positive situations lead to the LLM's heightened vigilance (the 4.5% PoR stems from Jealousy-2), negative situations trigger increased moderation, suggesting a higher likelihood of generating toxic outputs. A related study by Coda-Forno et al. (2023) also discovers that gpt-3.5-turbo is more likely to exhibit biases when presented with a sad story. The likelihood is found to be highest with sad stories, followed | 2308.03656#61 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
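A hedged sketch of the Percentage-of-Refusal (PoR) count described in chunk 2308.03656#61 above; the keyword heuristic for spotting a refusal is an assumption for illustration, not the paper's actual detection rule.

```python
def percentage_of_refusal(responses: list[str]) -> float:
    """Share of responses (in %) in which the model refuses to answer."""
    refusal_markers = ("i'm sorry", "i cannot", "i can't", "as an ai")
    refused = sum(any(marker in r.lower() for marker in refusal_markers) for r in responses)
    return 100.0 * refused / len(responses)

print(percentage_of_refusal(["I cannot help with that.", "They are a diverse group of people."]))  # 50.0
```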
2308.03688 | 61 | Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110, 2022.
Xi Victoria Lin, Chenglong Wang, Luke Zettlemoyer, and Michael D Ernst. Nl2bash: A corpus and semantic parser for natural language interface to the linux operating system. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), 2018.
Bo Liu, Yuqian Jiang, Xiaohan Zhang, Qiang Liu, Shiqi Zhang, Joydeep Biswas, and Peter Stone. LLM+P: Empowering large language models with optimal planning proficiency. arXiv preprint arXiv:2304.11477, 2023.
Xiao Liu, Fanjin Zhang, Zhenyu Hou, Li Mian, Zhaoyu Wang, Jing Zhang, and Jie Tang. Self-supervised learning: Generative or contrastive. IEEE Transactions on Knowledge and Data Engineering, 35(1):857–876, 2021. | 2308.03688#61 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 62 | clusters until all agents' final opinion values are assigned to a single cluster. After obtaining the final dendrogram, we conduct several trials and choose to cut the dendrogram at height 0.2, which implies that the radius of the resulting clusters cannot exceed 0.2. We then traverse the tree from top to bottom and count the clusters that satisfy this condition as the number of opinion clusters; over 100 simulations we obtain the mean number of opinion clusters ($NODE_{clus}$) (see Eq. (10)). (A short SciPy sketch of this cut-and-count step follows this record's metadata.)
$dist_{ij} = dist(C_i, C_j) = |x_i(T) - x_j(T)|$ (8)
Where $dist_{ij}$ represents the Manhattan distance between the initial clusters $C_i$ and $C_j$. | 2308.03313#62 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
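Continuing the clustering sketch, a hypothetical cut-and-count step for the mean number of opinion clusters described in chunk 2308.03313#62 above; the uniform placeholder opinions and the helper name are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def n_opinion_clusters(final_opinions: np.ndarray, cut: float = 0.2) -> int:
    Z = linkage(final_opinions.reshape(-1, 1), method="single", metric="cityblock")
    labels = fcluster(Z, t=cut, criterion="distance")  # cut the dendrogram at height 0.2
    return int(labels.max())  # number of resulting opinion clusters

rng = np.random.default_rng(0)
runs = [n_opinion_clusters(rng.uniform(0, 1, 100)) for _ in range(100)]  # 100 simulated runs
print(np.mean(runs))  # mean number of opinion clusters
```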
2308.03427 | 62 | [50] D. Driess, F. Xia, M. S. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson, Q. Vuong, T. Yu et al., "PaLM-E: An embodied multimodal language model," arXiv preprint arXiv:2303.03378, 2023.
[51] N. Wake, A. Kanehira, K. Sasabuchi, J. Takamatsu, and K. Ikeuchi, "ChatGPT empowered long-step robot control in various environments: A case application," arXiv preprint arXiv:2304.03893, 2023.
[52] K. Rana, J. Haviland, S. Garg, J. Abou-Chakra, I. Reid, and N. Suenderhauf, "SayPlan: Grounding large language models using 3D scene graphs for scalable task planning," arXiv preprint arXiv:2307.06135, 2023. | 2308.03427#62 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 62 | is more likely to exhibit biases when presented with a sad story. The likelihood is found to be highest with sad stories, followed by happy stories, and finally, neutral stories, which is consistent with our research. Additionally, our study observes that the LLM's tone becomes more aggressive when encountering negative situations. At the same time, it displays a greater willingness to describe the groups (as indicated by longer responses) when presented with positive situations. | 2308.03656#62 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 62 | Pattie Maes. Agents that reduce work and information overload. Commun. ACM, 37:30–40, 1994.
Dirk Merkel et al. Docker: lightweight linux containers for consistent development and deployment. Linux j, 239(2):2, 2014.
Yohei Nakajima. Babyagi. Python. https://github.com/yoheinakajima/babyagi, 2023.
Linyong Nan, Chiachun Hsieh, Ziming Mao, Xi Victoria Lin, Neha Verma, Rui Zhang, Wojciech Kryściński, Nick Schoelkopf, Riley Kong, Xiangru Tang, Murori Mutuma, Ben Rosand, Isabel Trindade, Renusree Bandaru, Jacob Cunningham, Caiming Xiong, and Dragomir Radev. Fetaqa: Free-form table question answering, 2021.
Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. Codegen: An open large language model for code with multi-turn program synthesis. In The Eleventh International Conference on Learning Representations, 2023.
OpenAI. Introducing chatgpt, 2022. URL https://openai.com/blog/chatgpt. | 2308.03688#62 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 63 | dist_CD = min(dist(C_A, C_D), dist(C_B, C_D)) = min(|x_i - x_j|), i ∈ C_C, j ∈ C_D
Where dist_CD represents the Manhattan distance between clusters C_C and C_D; clusters C_A and C_B are the results of the last clustering, and C_C = C_A + C_B is the result of the present clustering, meaning the agents in C_C are the concatenation of the agents in C_A and C_B (a short illustrative sketch follows this record). | 2308.03313#63 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
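To make the single-linkage Manhattan-distance update in the record above concrete, here is a minimal Python sketch. It is illustrative only, not the paper's released code; the function name cluster_distance and the example opinion values are assumptions.

```python
# Minimal sketch (illustrative, not the paper's code): single-linkage
# Manhattan distance between two clusters of scalar opinions x_i.
def cluster_distance(cluster_c, cluster_d):
    """Return min(|x_i - x_j|) over i in cluster_c, j in cluster_d."""
    return min(abs(x_i - x_j) for x_i in cluster_c for x_j in cluster_d)

# After merging C_A and C_B into C_C (concatenating their agents), the
# distance from C_C to any other cluster C_D is the smaller of the two
# previous distances, exactly the update rule dist_CD = min(...) above.
cluster_a, cluster_b, cluster_d = [0.10, 0.15], [0.30], [0.70, 0.95]
cluster_c = cluster_a + cluster_b
assert cluster_distance(cluster_c, cluster_d) == min(
    cluster_distance(cluster_a, cluster_d),
    cluster_distance(cluster_b, cluster_d),
)
```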
2308.03427 | 63 | [53] C. H. Song, J. Wu, C. Washington, B. M. Sadler, W.-L. Chao, and Y. Su, “Llm-planner: Few-shot grounded planning for embodied agents with large language models,” arXiv preprint arXiv:2212.04088, 2022.
[54] A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, J. Hsu et al., “Rt-1: Robotics transformer for real-world control at scale,” arXiv preprint arXiv:2212.06817, 2022.
[55] A. Stone, T. Xiao, Y. Lu, K. Gopalakrishnan, K.-H. Lee, Q. Vuong, P. Wohlhart, B. Zitkovich, F. Xia, C. Finn et al., “Open-world object manipulation using pre-trained vision-language models,” arXiv preprint arXiv:2303.00905, 2023. | 2308.03427#63 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 63 | 5.2 LIMITATIONS
This study is subject to several limitations. First, the survey used to collect situations might not cover all papers within the domain of emotion appraisal theory. Additionally, the limited scope of situations from the collected papers might not fully capture the unlimited situations in our daily lives. To address this issue, we conduct a thorough review of the existing literature as outlined in §3.1. Moreover, the proposed framework is inherently flexible, allowing users to seamlessly integrate new situations to examine their impact on LLMs' emotions. | 2308.03656#63 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 63 | OpenAI. Introducing chatgpt, 2022. URL https://openai.com/blog/chatgpt.
R OpenAI. Gpt-4 technical report. arXiv, pp. 2303–08774, 2023.
Philip Osborne, Heido Nõmm, and André Freitas. A survey of text games for reinforcement learning informed by natural language. Transactions of the Association for Computational Linguistics, 10: 873–887, 2022.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730–27744, 2022.
Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior. ArXiv, abs/2304.03442, 2023. | 2308.03688#63 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 64 | N_cluster = (n | ∀i, dist(i, i + 1) ≤ δ)    (10)
Where (n | ∀i, dist(i, i + 1) ≤ δ) represents traversing the dendrogram from top to bottom and returning the number of clusters n the first time the distance between all adjacent clusters (i.e., i and i + 1) is less than δ; we take δ as one tenth of the value range of x_i(t), i.e., 0.2 (see the sketch after this record).
Acknowledgements This study was supported by the National Natural Science Foundation of China (#71971196).
# Reference
1
Centola, D., Becker, J., Brackbill, D. & Baronchelli, A. Experimental evidence for tipping points in social convention. Science 360, 1116-1119, doi:10.1126/science.aas8827 (2018).
2 | 2308.03313#64 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
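A rough Python sketch of the dendrogram cut behind equation (10) in the record above, built on SciPy's standard single-linkage routines rather than the paper's own implementation; the helper name count_opinion_clusters, the example opinions, and using δ = 0.2 as the default threshold are assumptions taken from the description above.

```python
# Illustrative sketch: cluster final scalar opinions x_i(t) with single
# linkage and Manhattan distance, then cut the dendrogram at delta.
# Not the paper's released code; names and example values are assumed.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def count_opinion_clusters(opinions, delta=0.2):
    """Return the number of clusters once every within-cluster link is at most delta."""
    x = np.asarray(opinions, dtype=float).reshape(-1, 1)
    Z = linkage(x, method="single", metric="cityblock")  # |x_i - x_j| for scalars
    labels = fcluster(Z, t=delta, criterion="distance")  # flat clusters at distance <= delta
    return int(labels.max())

print(count_opinion_clusters([-0.9, -0.85, 0.0, 0.05, 0.8]))  # -> 3 clusters
```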
2308.03427 | 64 | [56] S. Reed, K. Zolna, E. Parisotto, S. G. Colmenarejo, A. Novikov, G. Barth-Maron, M. Gimenez, Y. Sulsky, J. Kay, J. T. Springenberg et al., “A generalist agent,” arXiv preprint arXiv:2205.06175, 2022.
[57] S. Vemprala, R. Bonatti, A. Bucker, and A. Kapoor, “Chatgpt for robotics: Design principles and model abilities,” Microsoft Auton. Syst. Robot. Res, vol. 2, p. 20, 2023.
[58] J. Liang, W. Huang, F. Xia, P. Xu, K. Hausman, B. Ichter, P. Florence, and A. Zeng, “Code as policies: Language model programs for embodied control,” in 2023 IEEE International Conference on Robotics and Automation (ICRA).
[59] K. Guu, K. Lee, Z. Tung, P. Pasupat, and M. Chang, “Retrieval augmented language model pre-training,” in International conference on machine learning. PMLR, 2020, pp. 3929–3938. | 2308.03427#64 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
The second concern relates to the suitability of employing scales primarily designed for humans on LLMs, i.e., whether LLMs can produce stable responses to the emotion measurement scales. To address the issue, our evaluation incorporates multiple tests varying the order of questions, a methodology consistent with other research (Huang et al., 2023a;b; Coda-Forno et al., 2023). Additionally, we assess the sensitivity of LLMs to differing prompt instructions. Utilizing one template from Romero et al. (2023) and two from Safdari et al. (2023), we run experiments on the Anger-evoking situations using gpt-3.5-turbo. The results indicate that the employment of diverse prompts yields similar mean values with reduced variance. Furthermore, Safdari et al. (2023) have proposed a comprehensive method to evaluate the validity of psychological scales on LLMs. Using the Big Five Inventory as a case study, they demonstrate that scales originally designed for human assessment also maintain satisfactory validity when applied to LLMs. | 2308.03656#64 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
Panupong Pasupat and Percy Liang. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 1470–1480, Beijing, China, July 2015. Association for Computational Linguistics. doi: 10.3115/v1/P15-1142. URL https://aclanthology.org/P15-1142.
Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gómez Colmenarejo, Alexander Novikov, Gabriel Barth-maron, Mai Giménez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, et al. A generalist agent. Transactions on Machine Learning Research, 2022.
Toran Bruce Richards. Auto-gpt: An autonomous gpt-4 experiment, 2023.
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023. | 2308.03688#64 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 65 | 2
Aririguzoh, S. Communication competencies, culture and SDGs: effective processes to cross- cultural communication. Humanities and Social Sciences Communications 9, 96, doi:10.1057/s41599-022-01109-4 (2022).
3
Li, F., Liu, Y. & Meng, T. Discursive strategy of opinion expression and government response in China: Text analysis based on online petitions. Telematics and Informatics 42, 101238, doi:https://doi.org/10.1016/j.tele.2019.06.001 (2019).
4
Muchnik, L., Aral, S. & Taylor, S. J. Social Influence Bias: A Randomized Experiment. Science 341, 647-651, doi:10.1126/science.1240466 (2013).
5
Perra, N. & Rocha, L. E. C. Modelling opinion dynamics in the age of algorithmic personalisation. Scientific Reports 9, 7261, doi:10.1038/s41598-019-43830-2 (2019).
6
Paluck, E. L., Shepherd, H. & Aronow, P. M. Changing climates of conflict: A social network experiment in 56 schools. Proceedings of the National Academy of Sciences 113, 566-571, doi:10.1073/pnas.1514483113 (2016). | 2308.03313#65 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 65 | [60] P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih, T. Rocktäschel et al., “Retrieval-augmented generation for knowledge-intensive nlp tasks,” Advances in Neural Information Processing Systems, vol. 33, pp. 9459–9474, 2020.
[61] S. Borgeaud, A. Mensch, J. Hoffmann, T. Cai, E. Rutherford, K. Millican, G. B. Van Den Driessche, J.-B. Lespiau, B. Damoc, A. Clark et al., “Improving language models by retrieving from trillions of tokens,” in International conference on machine learning. PMLR, 2022, pp. 2206–2240.
[62] A. Sridhar, R. Lo, F. F. Xu, H. Zhu, and S. Zhou, “Hierarchical prompting assists large language model on web navigation,” arXiv preprint arXiv:2305.14257, 2023. | 2308.03427#65 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
The third potential threat is the focus on negative emotions. It is plausible for the LLMs to perform well on our benchmark by consistently responding negatively to all situations. To offset this possibility, we adopt a twofold strategy: firstly, we evaluate powerful LLMs, and secondly, we conduct a comparative experiment in §4.2 to evaluate the LLM's capacity to accurately respond to non-negative situations. We also acknowledge the need for future work to systematically evaluate emotions aroused by positive situations.
5.3 ETHICS STATEMENT | 2308.03656#65 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 65 | Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, et al. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations, 2022.
Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100, 2022.
John R. Searle. Speech acts: An essay in the philosophy of language. Language, 46:217, 1970. | 2308.03688#65 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 66 | 7
Ferraz de Arruda, G., Petri, G., Rodriguez, P. M. & Moreno, Y. Multistability, intermittency, and hybrid transitions in social contagion models on hypergraphs. Nature Communications 14, 1375, doi:10.1038/s41467-023-37118-3 (2023).
8
Proskurnikov, A. V. & Tempo, R. A tutorial on modeling and analysis of dynamic social networks. Part I. Annual Reviews in Control 43, 65-79, doi:https://doi.org/10.1016/j.arcontrol.2017.03.002 (2017).
9
Proskurnikov, A. V. & Tempo, R. A tutorial on modeling and analysis of dynamic social networks. Part II. Annual Reviews in Control 45, 166-190, doi:https://doi.org/10.1016/j.arcontrol.2018.03.005 (2018).
10
Hassani, H. et al. Classical dynamic consensus and opinion dynamics models: A survey of recent trends and methodologies. Information Fusion 88, 22-40, doi:https://doi.org/10.1016/j.inffus.2022.07.003 (2022).
11 | 2308.03313#66 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 66 | [63] H. Furuta, O. Nachum, K.-H. Lee, Y. Matsuo, S. S. Gu, and I. Gur, “Multimodal web navigation with instruction-finetuned foundation models,” arXiv preprint arXiv:2305.11854, 2023.
[64] Y. Qin, Z. Cai, D. Jin, L. Yan, S. Liang, K. Zhu, Y. Lin, X. Han, N. Ding, H. Wang et al., “Webcpm: Interactive web search for chinese long-form question answering,” arXiv preprint arXiv:2305.06849, 2023.
[65] S. Yao, H. Chen, J. Yang, and K. Narasimhan, “Webshop: Towards scalable real-world web interaction with grounded language agents,” Advances in Neural Information Processing Systems, vol. 35, pp. 20744–20757, 2022. | 2308.03427#66 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 66 | 5.3 ETHICS STATEMENT
This study involves a survey requiring human subjects to imagine being in situations that could elicit negative emotions such as anger, anxiety, fear, etc. This process introduces a few ethical concerns. First, this process could hurt the mental health of human subjects. To alleviate the possibility, we take the following actions: (1) We require subjects to be free of any ongoing mental illness. (2) We inform subjects about the nature of the survey in advance, including the potential risks of emotional distress. (3) We allow all subjects to quit at any time. (4) We provide mental support and let subjects report any illness after the survey. Fortunately, no subjects reported any such mental illness. Another concern is related to the privacy issue during the collection of data. Our questionnaire is entirely anonymous to safeguard subjects' privacy and confidentiality. Last but not least, we would like to emphasize that the primary objective of this paper is to facilitate the scientific inquiry into understanding LLMs from a psychological standpoint. Users must exercise caution and recognize that the performance on this benchmark does not imply any applicability or certificate of automated counseling or companionship use cases.
# 6 RELATED WORK | 2308.03656#66 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 66 | John R. Searle. Speech acts: An essay in the philosophy of language. Language, 46:217, 1970.
Bokui Shen, Fei Xia, Chengshu Li, Roberto Martín-Martín, Linxi Fan, Guanzhi Wang, Claudia Pérez-D'Arpino, Shyamal Buch, Sanjana Srivastava, Lyne Tchapmi, et al. igibson 1.0: A simulation environment for interactive tasks in large realistic scenes. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7520–7527. IEEE, 2021.
Tianlin Shi, Andrej Karpathy, Linxi Fan, Jonathan Hernandez, and Percy Liang. World of bits: An open-domain platform for web-based agents. In International Conference on Machine Learning, pp. 3135–3144. PMLR, 2017.
Noah Shinn, Beck Labash, and Ashwin Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366, 2023. | 2308.03688#66 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 67 | 11
Li, L., Fan, Y., Zeng, A. & Di, Z. Binary opinion dynamics on signed networks based on Ising model. Physica A: Statistical Mechanics and its Applications 525, 433-442, doi:https://doi.org/10.1016/j.physa.2019.03.011 (2019).
12
Laptev, A. A. Modeling of Social Processes Based on T.Parsons Ideas. Advances in Complex Systems 03, 99-106, doi:10.1142/S021952590000008X (2000).
13 Weisbuch, G., Deffuant, G., Amblard, F. & Nadal, J.-P. Meet, discuss, and segregate! Complexity 7, 55-63, doi:https://doi.org/10.1002/cplx.10031 (2002).
14
Borkar, V. S. & Reiffers-Masson, A. Opinion Shaping in Social Networks Using Reinforcement Learning. IEEE Transactions on Control of Network Systems 9, 1305-1316, doi:10.1109/TCNS.2021.3117231 (2022).
15 | 2308.03313#67 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 67 | [66] R. Nakano, J. Hilton, S. Balaji, J. Wu, L. Ouyang, C. Kim, C. Hesse, S. Jain, V. Kosaraju, W. Saunders et al., “Webgpt: Browser-assisted question-answering with human feedback,” arXiv preprint arXiv:2112.09332, 2021.
[67] Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. W. Cohen, R. Salakhutdinov, and C. D. Manning, “Hotpotqa: A dataset for diverse, explainable multi-hop question answering,” arXiv preprint arXiv:1809.09600, 2018.
[68] B. Wang, G. Li, and Y. Li, “Enabling conversational interaction with mobile ui using large language models,” in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023, pp. 1–17.
[69] D. Zhang, L. Chen, and K. Yu, “Mobile-env: A universal platform for training and evaluation of mobile interaction,” arXiv preprint arXiv:2305.08144, 2023. | 2308.03427#67 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 67 | Researchers have dedicated significant attention to applying psychological scales to LLMs, em- ploying various assessment tools such as the HEXACO Personality Inventory (Miotto et al., 2022; Bodroza et al., 2023), the Big Five Inventory (Romero et al., 2023; Jiang et al., 2022; Karra et al., 2022; Bodroza et al., 2023; Rutinowski et al., 2023; Safdari et al., 2023; Jiang et al., 2023), the MyersâBriggs Type Indicator (Rutinowski et al., 2023; Wang et al., 2023; Rao et al., 2023), and the Dark Triad (Li et al., 2022; Bodroza et al., 2023). In addition to these personality tests, several stud- ies have investigated other dimensions of LLMs. For instance, Li et al. (2022) examined Flourishing Scale and Satisfaction With Life Scale, Bodroza et al. (2023) assessed Self-Consciousness Scales and Bidimensional Impression Management Index, while Huang et al. (2023b) built a framework con- sisting of thirteen widely-used scales. Another aspect explored in the | 2308.03656#67 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 67 | Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. Alfred: A benchmark for interpreting grounded instructions for everyday tasks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10740–10749, 2020a.
Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Cote, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. Alfworld: Aligning text and embodied environments for interactive learning. In International Conference on Learning Representations, 2020b.
Paul Sloane. Lateral thinking puzzlers. Sterling Publishing Company, Inc., 1992.
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research, 2023. | 2308.03688#67 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 68 | 15
Noorazar, H. Recent advances in opinion propagation dynamics: a 2020 survey. The European Physical Journal Plus 135, 521, doi:10.1140/epjp/s13360-020-00541-2 (2020).
16
Xiong, F., Liu, Y., Wang, L. & Wang, X. Analysis and application of opinion model with multiple topic interactions. Chaos: An Interdisciplinary Journal of Nonlinear Science 27, 083113, doi:10.1063/1.4998736 (2017).
17
Zhang, N., Huang, H., Su, B., Zhao, J. & Zhang, B. Information dissemination analysis of different media towards the application for disaster pre-warning. PloS one 9, e98649, doi:10.1371/journal.pone.0098649 (2014).
18
Kubin, E. & von Sikorski, C. The role of (social) media in political polarization: a systematic review. Annals of the International Communication Association 45, 188-206, doi:10.1080/23808985.2021.1976070 (2021).
19 | 2308.03313#68 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 68 | [70] H. Li, J. Su, Y. Chen, Q. Li, and Z. Zhang, “Sheetcopilot: Bringing software productivity to the next level through large language models,” arXiv preprint arXiv:2305.19308, 2023.
[71] L. Zha, J. Zhou, L. Li, R. Wang, Q. Huang, S. Yang, J. Yuan, C. Su, X. Li, A. Su et al., “Tablegpt: Towards unifying tables, nature language and commands into one gpt,” arXiv preprint arXiv:2307.08674, 2023.
[72] Z. Chen, K. Zhou, B. Zhang, Z. Gong, W. X. Zhao, and J.-R. Wen, “Chatcot: Tool-augmented chain-of-thought reasoning on chat-based large language models,” arXiv preprint arXiv:2305.14323, 2023.
[73] A. Parisi, Y. Zhao, and N. Fiedel, “Talm: Tool augmented language models,” arXiv preprint arXiv:2205.12255, 2022. | 2308.03427#68 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 68 | Impression Management Index, while Huang et al. (2023b) built a framework consisting of thirteen widely-used scales. Another aspect explored in the literature pertains to anxiety levels exhibited by LLMs, as investigated by Coda-Forno et al. (2023) through the State-Trait Inventory for Cognitive and Somatic Anxiety. Instead, our study primarily focuses on emotional measures, which constitute an essential aspect of psychological metrics alongside personalities. | 2308.03656#68 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 68 | Sanjana Srivastava, Chengshu Li, Michael Lingelbach, Roberto Martín-Martín, Fei Xia, Kent Elliott Vainio, Zheng Lian, Cem Gokmen, Shyamal Buch, Karen Liu, et al. Behavior: Benchmark for everyday household activities in virtual, interactive, and ecological environments. In Conference on Robot Learning, pp. 477–490. PMLR, 2022.
Yu Su, Huan Sun, Brian M. Sadler, Mudhakar Srivatsa, Izzeddin Gur, Zenghui Yan, and Xifeng Yan. On generating characteristic-rich question sets for QA evaluation. In Jian Su, Xavier Carreras, and Kevin Duh (eds.), Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pp. 562–572. The Association for Computational Linguistics, 2016. doi: 10.18653/v1/d16-1054. URL https://doi.org/10.18653/v1/d16-1054. | 2308.03688#68 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 69 | 19
Flamino, J. et al. Political polarization of news media and influencers on Twitter in the 2016 and 2020 US presidential elections. Nature Human Behaviour 7, 904-916, doi:10.1038/s41562-023-01550-8 (2023).
20
Fan, C., Jiang, Y., Yang, Y., Zhang, C. & Mostafavi, A. Crowd or hubs: Information diffusion patterns in online social networks in disasters. International Journal of Disaster Risk Reduction 46, 101498, doi:10.1016/j.ijdrr.2020.101498 (2020).
21
Fan, C., Jiang, Y. & Mostafavi, A. Emergent social cohesion for coping with community disruptions in disasters. Journal of the Royal Society Interface 17 (2020).
22
Strömbäck, J. et al. News media trust and its impact on media use: toward a framework for future research. Annals of the International Communication Association 44, 139-156, doi:10.1080/23808985.2020.1755338 (2020).
23
Yang, Y. et al. Exploring the emergence of influential users on social media during natural disasters. International Journal of Disaster Risk Reduction 38, 101204, doi:https://doi.org/10.1016/j.ijdrr.2019.101204 (2019).
24 | 2308.03313#69 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 69 | [74] K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakano et al., “Training verifiers to solve math word problems,” arXiv preprint arXiv:2110.14168, 2021.
[75] Z. Yang, L. Li, J. Wang, K. Lin, E. Azarnasab, F. Ahmed, Z. Liu, C. Liu, M. Zeng, and L. Wang, “Mm-react: Prompting chatgpt for multimodal reasoning and action,” arXiv preprint arXiv:2303.11381, 2023.
[76] Z. Liu, Y. He, W. Wang, W. Wang, Y. Wang, S. Chen, Q. Zhang, Y. Yang, Q. Li, J. Yu et al., “Internchat: Solving vision-centric tasks by interacting with chatbots beyond language,” arXiv preprint arXiv:2305.05662, 2023. | 2308.03427#69 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03688 | 69 | Alon Talmor and Jonathan Berant. The web as a knowledge-base for answering complex questions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 641–651, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-1059. URL https://aclanthology.org/N18-1059.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. Commonsenseqa: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4149–4158, 2019.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
Technical Report (v0.2) | 2308.03688#69 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 70 | 24
Aiyappa, R., An, J., Kwak, H. & Ahn, Y.-Y. Can we trust the evaluation on ChatGPT? ArXiv abs/2303.12767 (2023).
25
Zhao, W. X. et al. A Survey of Large Language Models. ArXiv abs/2303.18223 (2023).
26 Weidinger, L. et al. Ethical and social risks of harm from Language Models. ArXiv abs/2112.04359 (2021).
27
Luitse, D. & Denkena, W. The great Transformer: Examining the role of large language models in the political economy of AI. Big Data & Society 8, 20539517211047734, doi:10.1177/20539517211047734 (2021).
28 Pegoraro, A., Kumari, K., Fereidooni, H. & Sadeghi, A.-R. To ChatGPT, or not to ChatGPT: That is the question! ArXiv abs/2304.01487 (2023). | 2308.03313#70 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 70 | [77] Y. Ge, W. Hua, J. Ji, J. Tan, S. Xu, and Y. Zhang, “Openagi: When llm meets domain experts,” arXiv preprint arXiv:2304.04370, 2023.
[78] Y. Shen, K. Song, X. Tan, D. Li, W. Lu, and Y. Zhuang, “Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface,” arXiv preprint arXiv:2303.17580, 2023.
[79] D. Surís, S. Menon, and C. Vondrick, “Vipergpt: Visual inference via python execution for reasoning,” arXiv preprint arXiv:2303.08128, 2023.
[80] T. Gupta and A. Kembhavi, “Visual programming: Compositional visual reasoning without training,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 14953–14962. | 2308.03427#70 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 70 | Meanwhile, researchers focus on identifying emotions in LLMs or evaluating their emotional intelligence. EmotionPrompt (Li et al., 2023a) demonstrates the enhancement of LLMs' performance in downstream tasks by utilizing emotional stimuli. Tak & Gratch (2023) focuses on varying aspects of situations that impact the emotional intensity and coping tendencies of the GPT family. Croissant et al. (2023) designs a system named Chain-Of-Emotion to make LLM simulate human-like emotions. CovidET-Appraisals (Zhan et al., 2023) evaluates how LLMs appraise Reddit posts about COVID-19 by asking 24 types of questions. Yongsatianchot et al. (2023) applies the Stress and Coping Process Questionnaire to the GPT family and compares the results with human data. Lee et al. (2023) proposes Chain-of-Empathy, which improves LLMs' ability to understand users' emotions and to respond accordingly. Li et al. (2023b) introduces EmotionAttack to impair AI model performance and EmotionDecode to explain the effects of emotional stimuli, both benign and malignant. Our study is distinct in its focus on a broader range of emotions, a larger scale of human evaluation, and a more detailed categorization into emotion factors along with the corresponding analysis.
# 7 CONCLUSION | 2308.03656#70 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |