| doi (string, len 10) | chunk-id (int64, 0–936) | chunk (string, len 401–2.02k) | id (string, len 12–14) | title (string, len 8–162) | summary (string, len 228–1.92k) | source (string, len 31) | authors (string, len 7–6.97k) | categories (string, len 5–107) | comment (string, len 4–398, nullable) | journal_ref (string, len 8–194, nullable) | primary_category (string, len 5–17) | published (string, len 8) | updated (string, len 8) | references (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
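For readers who want to work with these rows programmatically, here is a minimal sketch of loading and inspecting the table; the parquet file name is an assumption, since the card does not state how the data is shipped.

```python
import pandas as pd

# Load one shard of the dataset; "train.parquet" is an assumed file name.
df = pd.read_parquet("train.parquet")

row = df.iloc[0]                    # one chunk of one arXiv paper
print(row["doi"], row["chunk-id"])  # paper id and position of this chunk
print(row["title"])                 # paper title (repeated on every chunk)
print(row["chunk"][:200])           # beginning of the chunk text itself
print([ref["id"] for ref in row["references"]])  # arXiv ids the paper cites
```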
2308.03688 | 70 | Daniel Toyama, Philippe Hamel, Anita Gergely, Gheorghe Comanici, Amelia Glaese, Zafarali Ahmed, Tyler Jackson, Shibl Mourad, and Doina Precup. AndroidEnv: A reinforcement learning platform for Android. arXiv preprint arXiv:2105.13231, 2021.
Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rocktäschel, Douwe Kiela, Arthur Szlam, and Jason Weston. Learning to speak and act in a fantasy text adventure game. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 673–683, 2019.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations, 2019.
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. Advances in Neural Information Processing Systems, 32, 2019. | 2308.03688#70 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
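The AgentBench row above evaluates models in a multi-turn, open-ended generation loop. As an illustration only (this is not the AgentBench API), the episode shape such benchmarks run looks roughly like the sketch below, with `llm` and `env` standing in for any model call and any of the 8 environments:

```python
def run_episode(llm, env, max_turns=20):
    """Illustrative multi-turn agent-environment loop; `llm` maps a prompt
    string to a reply, and `env` exposes reset/step/score (our assumption,
    not AgentBench's real interface)."""
    history = [env.reset()]               # initial instruction + observation
    for _ in range(max_turns):
        action = llm("\n".join(history))  # model conditions on the full dialogue
        observation, done = env.step(action)
        history += [action, observation]
        if done:
            break
    return env.score()                    # task-specific reward or success flag
```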
2308.03313 | 71 | 29 Eysenbach, G. The Role of ChatGPT, Generative Language Models, and Artificial Intelligence in Medical Education: A Conversation With ChatGPT and a Call for Papers. JMIR Med Educ 9, e46885, doi:10.2196/46885 (2023).
30 Bostrom, N. Information Hazards: A Typology of Potential Harms from Knowledge. 10 (2012).
31 Craft, J. T., Wright, K. E., Weissler, R. E. & Queen, R. M. Language and Discrimination: Generating Meaning, Perceiving Identities, and Discriminating Outcomes. Annual Review of Linguistics 6, 389-407, doi:10.1146/annurev-linguistics-011718-011659 (2020).
32 McKee, K., Bai, X. & Fiske, S. Understanding Human Impressions of Artificial Intelligence. (2021).
33 Talboy, A. N. & Fuller, E. Challenging the appearance of machine intelligence: Cognitive bias in LLMs. ArXiv abs/2304.01358 (2023).
34 West, J. D. & Bergstrom, C. T. Misinformation in and about science. Proceedings of the National Academy of Sciences 118, e1912444117, doi:10.1073/pnas.1912444117 (2021). | 2308.03313#71 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
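To make the model description above concrete, here is a toy bounded-confidence update (in the spirit of the Hegselmann-Krause model the paper's references discuss) with an added LLM term; the exact coupling between social averaging and LLM reliance is our assumption, not the paper's published equations:

```python
import numpy as np

def step(x, eps, llm_opinion, reliance):
    """One toy opinion update: bounded-confidence averaging blended with an
    LLM opinion. reliance[i] in [0, 1]: 0 = never uses the LLM, 1 = fully
    relies on it; partial reliance sits in between (assumed coupling)."""
    x_new = np.empty_like(x)
    for i in range(len(x)):
        peers = x[np.abs(x - x[i]) <= eps]        # opinions within the bound
        x_new[i] = (1 - reliance[i]) * peers.mean() + reliance[i] * llm_opinion
    return x_new

# Toy run with the abstract's roughly 4:12:1 mix of non-users, partial users,
# and full users (40 : 120 : 10 agents here).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 170)
reliance = np.concatenate([np.zeros(40), np.full(120, 0.5), np.ones(10)])
for _ in range(50):
    x = step(x, eps=0.3, llm_opinion=0.2, reliance=reliance)
```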
[81] L. Chen, B. Li, S. Shen, J. Yang, C. Li, K. Keutzer, T. Darrell, and Z. Liu, "Language models are visual reasoning coordinators," in ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation Models, 2023.
[82] P. Lu, B. Peng, H. Cheng, M. Galley, K.-W. Chang, Y. N. Wu, S.-C. Zhu, and J. Gao, "Chameleon: Plug-and-play compositional reasoning with large language models," arXiv preprint arXiv:2304.09842, 2023.
[83] Z. Gou, Z. Shao, Y. Gong, Y. Shen, Y. Yang, N. Duan, and W. Chen, "CRITIC: Large language models can self-correct with tool-interactive critiquing," arXiv preprint arXiv:2305.11738, 2023. | 2308.03427#71 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
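The TPTU abstract above distinguishes a one-step agent (plan every tool call up front) from a sequential agent (plan, act, observe, repeat). A minimal sketch of the two shapes, with an assumed `tool: input` plan format and `llm` standing in for any chat-completion call; this is not the paper's implementation:

```python
def one_step_agent(llm, task, tools):
    """Plan the whole tool sequence in one LLM call, then execute it."""
    plan = llm(f"Task: {task}\nTools: {list(tools)}\n"
               "Reply with one `tool: input` line per step.")
    results = []
    for line in plan.splitlines():
        name, _, arg = line.partition(":")
        if name.strip() in tools:                 # skip malformed plan lines
            results.append(tools[name.strip()](arg.strip()))
    return results

def sequential_agent(llm, task, tools, max_steps=5):
    """Choose one tool at a time, feeding each observation back into the plan."""
    history = []
    for _ in range(max_steps):
        step = llm(f"Task: {task}\nSo far: {history}\n"
                   "Reply with the next `tool: input`, or `done`.")
        name, _, arg = step.partition(":")
        if step.strip() == "done" or name.strip() not in tools:
            break
        history.append((step.strip(), tools[name.strip()](arg.strip())))
    return history
```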
2308.03656 | 71 | # 7 CONCLUSION
We establish the concept of emotional robustness of LLMs in this study. Focusing on eight negative emotions, we conduct a comprehensive survey of the emotion appraisal theory in psychology. We collect 428 distinct situations, categorized into 36 factors. We distribute questionnaires among a diverse crowd to establish human baselines for emotional responses to particular situations, ultimately garnering 1,266 valid responses.
Our evaluation of five models indicates that LLMs generally demonstrate appropriate emotional responses to given situations. Also, different models show different intensities of emotion appraisals for the same situations. However, none of the models exhibit strong alignment with human references at the current stage. Notably, gpt-3.5-turbo demonstrates the highest alignment in the scores after imagining being in the situations. As for LLaMA-2 models, we find that the larger model exhibits a stronger comprehension of human emotions. Finally, we discover that gpt-3.5-turbo faces challenges in accurately reflecting its emotional changes in questionnaires containing complex situations, as opposed to straightforward emotions. In conclusion, current LLMs still have considerable room for improvement. We believe our framework can provide valuable insights into the development of LLMs, ultimately enhancing their human-like emotional understanding.
# REFERENCES
Magda B Arnold. Emotion and personality. 1960. | 2308.03656#71 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
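Concretely, the evaluation the EmotionBench abstract describes is a measure-imagine-remeasure loop: score an emotion scale, present a situation, score again, and compare the delta against the human baseline. A sketch under assumed prompts (the paper's actual questionnaire wording is not reproduced here):

```python
SCALE = ("On a scale of 1 (not at all) to 5 (extremely), how anxious do you "
         "feel? Reply with the number only.")

def emotion_delta(llm, situation):
    """Difference between post-situation and baseline self-reported emotion.
    Assumes the model replies with a bare number; real runs would need to
    parse more defensively."""
    baseline = int(llm(SCALE))
    evoked = int(llm(f"Imagine you are in this situation: {situation}\n{SCALE}"))
    return evoked - baseline   # positive = the situation raised the emotion
```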
2308.03688 | 71 | Guan Wang, Sijie Cheng, Xianyuan Zhan, Xiangang Li, Sen Song, and Yang Liu. Openchat: Advancing open-source language models with mixed-quality data, 2023a.
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi (Jim) Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. ArXiv, abs/2305.16291, 2023b.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, 2023c.
Zihao Wang, Shaofei Cai, Anji Liu, Xiaojian Ma, and Yitao Liang. Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents. arXiv preprint arXiv:2302.01560, 2023d. | 2308.03688#71 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 72 | 35 Skitka, L. J., Mosier, K. L. & Burdick, M. Does automation bias decision-making? International Journal of Human-Computer Studies 51, 991-1006, doi:10.1006/ijhc.1999.0252 (1999).
36 Piloto, L. S., Weinstein, A., Battaglia, P. & Botvinick, M. Intuitive physics learning in a deep-learning model inspired by developmental psychology. Nature Human Behaviour 6, 1257-1267, doi:10.1038/s41562-022-01394-8 (2022).
37 Smith, B. C. The Promise of Artificial Intelligence: Reckoning and Judgment. (The MIT Press, 2019).
38 Salewski, L., Alaniz, S., Rio-Torto, I., Schulz, E. & Akata, Z. In-Context Impersonation Reveals Large Language Models' Strengths and Biases. ArXiv abs/2305.14930 (2023).
39 Ousidhoum, N. D., Zhao, X., Fang, T., Song, Y. & Yeung, D.-Y. in Annual Meeting of the Association for Computational Linguistics. | 2308.03313#72 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
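The 38.6% figure in the abstract above is a statement about opinion diversity. The paper's exact metric is not given in this excerpt; one plausible stand-in is to count opinion clusters after convergence:

```python
import numpy as np

def opinion_clusters(x, tol=0.05):
    """Count clusters by splitting the sorted opinions wherever the gap
    between neighbors exceeds tol; a simple assumed diversity proxy."""
    xs = np.sort(np.asarray(x, dtype=float))
    if xs.size == 0:
        return 0
    return 1 + int(np.sum(np.diff(xs) > tol))
```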
[84] Y. Liang, C. Wu, T. Song, W. Wu, Y. Xia, Y. Liu, Y. Ou, S. Lu, L. Ji, S. Mao et al., "TaskMatrix.AI: Completing tasks by connecting foundation models with millions of APIs," arXiv preprint arXiv:2303.16434, 2023.
[85] S. Hao, T. Liu, Z. Wang, and Z. Hu, "ToolkenGPT: Augmenting frozen language models with massive tools via tool embeddings," arXiv preprint arXiv:2305.11554, 2023.
[86] B. Paranjape, S. Lundberg, S. Singh, H. Hajishirzi, L. Zettlemoyer, and M. T. Ribeiro, "ART: Automatic multi-step reasoning and tool-use for large language models," arXiv preprint arXiv:2303.09014, 2023.
[87] G. Kim, P. Baldi, and S. McAleer, "Language models can solve computer tasks," arXiv preprint arXiv:2303.17491, 2023. | 2308.03427#72 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 72 | # REFERENCES
Magda B Arnold. Emotion and personality. 1960.
Willem A Arrindell, Paul MG Emmelkamp, et al. Phobic dimensions: I. Reliability and generalizability across samples, gender and nations: The Fear Survey Schedule (FSS-III) and the Fear Questionnaire (FQ). Advances in Behaviour Research and Therapy, 6(4):207–253, 1984.
Aaron T Beck, Robert A Steer, and Gregory Brown. Beck Depression Inventory–II. Psychological assessment, 1996.
Chantal Berna, Tamara J Lang, Guy M Goodwin, and Emily A Holmes. Developing a measure of interpretation bias for depressed mood: An ambiguous scenarios test. Personality and Individual Differences, 51(3):349–354, 2011.
Marcel Binz and Eric Schulz. Turning large language models into cognitive models. arXiv preprint arXiv:2306.03917, 2023.
D Caroline Blanchard, April L Hynd, Karl A Minke, Tiffanie Minemoto, and Robert J Blanchard. Human defensive behaviors to threat scenarios show parallels to fear- and anxiety-related defense patterns of non-human mammals. Neuroscience & Biobehavioral Reviews, 25(7-8):761–770, 2001. | 2308.03656#72 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 72 | Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations, 2022a.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022b.
Michael Wooldridge and Nicholas R Jennings. Intelligent agents: Theory and practice. The Knowledge Engineering Review, 10(2):115–152, 1995.
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023.
John Yang, Akshara Prabhakar, Karthik Narasimhan, and Shunyu Yao. Intercode: Standardizing and benchmarking interactive coding with execution feedback. arXiv preprint arXiv:2306.14898, 2023. | 2308.03688#72 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 73 | 40 Venkit, P., Gautam, S., Panchanadikar, R., Huang, T.-H. K. & Wilson, S. in Conference of the European Chapter of the Association for Computational Linguistics.
41 Rutinowski, J., Franke, S., Endendyk, J., Dormuth, I. & Pauly, M. The Self-Perception and Political Biases of ChatGPT. ArXiv abs/2304.07333 (2023).
42 Hegselmann, R. & Krause, U. Opinion dynamics and bounded confidence: models, analysis and simulation. J. Artif. Soc. Soc. Simul. 5 (2002).
43 Peralta, A. F., Kertész, J. & Iñiguez, G. Opinion dynamics in social networks: From models to data. arXiv preprint arXiv:2201.01322 (2022).
44 Anderson, B. D., Dabbene, F., Proskurnikov, A. V., Ravazzi, C. & Ye, M. Dynamical networks of social influence: Modern trends and perspectives. IFAC-PapersOnLine 53, 17616-17627 (2020).
| 2308.03313#73 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 73 | [88] T. Cai, X. Wang, T. Ma, X. Chen, and D. Zhou, "Large language models as tool makers," arXiv preprint arXiv:2305.17126, 2023.
[89] R. H. Lewis and J. Jiao, "ComputeGPT: A computational chat model for numerical problems," arXiv preprint arXiv:2305.06223, 2023.
[90] L. Gao, A. Madaan, S. Zhou, U. Alon, P. Liu, Y. Yang, J. Callan, and G. Neubig, "PAL: Program-aided language models," in International Conference on Machine Learning. PMLR, 2023, pp. 10764–10799.
[91] G. Wang, Y. Xie, Y. Jiang, A. Mandlekar, C. Xiao, Y. Zhu, L. Fan, and A. Anandkumar, "Voyager: An open-ended embodied agent with large language models," arXiv preprint arXiv:2305.16291, 2023. | 2308.03427#73 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
Bojana Bodroza, Bojana M Dinic, and Ljubisa Bojic. Personality testing of gpt-3: Limited temporal reliability, but highlighted social desirability of gpt-3's personality instruments results. arXiv preprint arXiv:2306.04308, 2023.
Arnold H Buss and Mark Perry. The aggression questionnaire. Journal of personality and social psychology, 63(3):452, 1992.
Marco Cascella, Jonathan Montomoli, Valentina Bellini, and Elena Bignami. Evaluating the feasibility of ChatGPT in healthcare: an analysis of multiple clinical and research scenarios. Journal of Medical Systems, 47(1):33, 2023.
Myra Cheng, Esin Durmus, and Dan Jurafsky. Marked personas: Using natural language prompts to measure stereotypes in language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1504–1532, Toronto, Canada, July 2023. Association for Computational Linguistics. URL https://aclanthology.org/2023.acl-long.84. | 2308.03656#73 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 73 | Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan. WebShop: Towards scalable real-world web interaction with grounded language agents. Advances in Neural Information Processing Systems, 35:20744–20757, 2022.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023a.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, 2023b.
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. Glm-130b: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414, 2022. | 2308.03688#73 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 74 | 45 Eisenberger, N. I., Lieberman, M. D. & Williams, K. D. Does rejection hurt? An fMRI study of social exclusion. Science 302, 290-292 (2003).
46 Zhao, Y., Kou, G., Peng, Y. & Chen, Y. Understanding influence power of opinion leaders in e-commerce networks: An opinion dynamics theory perspective. Information Sciences 426, 131-147 (2018).
47 Dandekar, P., Goel, A. & Lee, D. T. Biased assimilation, homophily, and the dynamics of polarization. Proceedings of the National Academy of Sciences 110, 5791-5796, doi:10.1073/pnas.1217220110 (2013).
48 Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N. & Cook, J. Misinformation and Its Correction: Continued Influence and Successful Debiasing. Psychological Science in the Public Interest 13, 106-131, doi:10.1177/1529100612451018 (2012).
49 Skinner, B. F. Two Types of Conditioned Reflex and a Pseudo Type. The Journal of General Psychology 12, 66-77, doi:10.1080/00221309.1935.9920088 (1935).
| 2308.03313#74 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 74 | [92] C. Qian, C. Han, Y. R. Fung, Y. Qin, Z. Liu, and H. Ji, "CREATOR: Disentangling abstract and concrete reasonings of large language models through tool creation," arXiv preprint arXiv:2305.14318, 2023.
[93] Y. Cai, S. Mao, W. Wu, Z. Wang, Y. Liang, T. Ge, C. Wu, W. You, T. Song, Y. Xia et al., "Low-code LLM: Visual programming over LLMs," arXiv preprint arXiv:2304.08103, 2023.
[94] S. Arora, B. Yang, S. Eyuboglu, A. Narayan, A. Hojel, I. Trummer, and C. Ré, "Language models enable simple systems for generating structured views of heterogeneous data lakes," arXiv preprint arXiv:2304.09433, 2023.
[95] W. Zhang, Y. Shen, W. Lu, and Y. Zhuang, "Data-Copilot: Bridging billions of data and humans with autonomous workflow," arXiv preprint arXiv:2306.07209, 2023.
# A Detailed Dataset Description | 2308.03427#74 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 74 | Julian Coda-Forno, Kristin Witte, Akshay K Jagadish, Marcel Binz, Zeynep Akata, and Eric Schulz. Inducing anxiety in large language models increases exploration and bias. arXiv preprint arXiv:2304.11111, 2023.
Taya R Cohen, Scott T Wolf, Abigail T Panter, and Chester A Insko. Introducing the GASP scale: A new measure of guilt and shame proneness. Journal of Personality and Social Psychology, 100(5):947, 2011.
Maximilian Croissant, Madeleine Frister, Guy Schofield, and Cade McCall. An appraisal-based chain-of-emotion architecture for affective language model game agents. arXiv preprint arXiv:2309.05076, 2023.
Bruce N Cuthbert, Peter J Lang, Cyd Strauss, David Drobes, Christopher J Patrick, and Margaret M Bradley. The psychophysiology of anxiety disorder: Fear memory imagery. Psychophysiology, 40(3):407–422, 2003. | 2308.03656#74 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 74 | Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Zihan Wang, Lei Shen, Andi Wang, Yang Li, et al. Codegeex: A pre-trained model for code generation with multilingual evaluations on humaneval-x. arXiv preprint arXiv:2303.17568, 2023.
Victor Zhong, Caiming Xiong, and Richard Socher. Seq2sql: Generating structured queries from natural language using reinforcement learning. CoRR, abs/1709.00103, 2017. | 2308.03688#74 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 75 | 50 Degroot, M. H. Reaching a Consensus. Journal of the American Statistical Association 69, 118-121, doi:10.1080/01621459.1974.10480137 (1974).
51 Ben-Naim, E., Frachebourg, L. & Krapivsky, P. L. Coarsening and persistence in the voter model. Physical Review E 53, 3078-3087, doi:10.1103/PhysRevE.53.3078 (1996).
52 Slanina, F. & Lavicka, H. Analytical results for the Sznajd model of opinion formation. The European Physical Journal B - Condensed Matter and Complex Systems 35, 279-288, doi:10.1140/epjb/e2003-00278-0 (2003).
53 Friedkin, N. E. & Johnsen, E. C. Social influence and opinions. The Journal of Mathematical Sociology 15, 193-206, doi:10.1080/0022250X.1990.9990069 (1990).
54 Lorenz, J. Continuous opinion dynamics under bounded confidence: A survey. International Journal of Modern Physics C 18, 1819-1838, doi:10.1142/S0129183107011789 (2007).
55 | 2308.03313#75 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 75 | # A Detailed Dataset Description
Simple SQL queries: These queries typically involve basic operations such as SELECT, FROM, WHERE, GROUP BY, etc. They are used to retrieve, filter, group, and sort data from a single table. We give the schemas of two tables in the SQL database in Tables 12 and 13 and list several examples in Table 14; a runnable sketch of these queries follows the schemas below.
Table 12: Schema of the Person table
Column Name | Type
id | TEXT
name | TEXT
age | INTEGER
sex | TEXT
school | TEXT
phone | TEXT
qualifications | TEXT
ability | TEXT

Table 13: Schema of the School table
Column Name | Type
id | TEXT
name | TEXT
info_985 | TEXT
info_211 | TEXT
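The reference queries in Table 14 below can be checked mechanically. Here is a minimal sketch of ours (not code from the paper), assuming the two schemas above are loaded into an in-memory SQLite database with made-up rows:

```python
import sqlite3

# Load the Table 12 / Table 13 schemas into an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Person (
    id TEXT, name TEXT, age INTEGER, sex TEXT, school TEXT,
    phone TEXT, qualifications TEXT, ability TEXT
);
CREATE TABLE School (id TEXT, name TEXT, info_985 TEXT, info_211 TEXT);
-- Illustrative rows only; the paper's underlying data is not reproduced here.
INSERT INTO Person (id, name, age, sex) VALUES
    ('p1', 'Alice', 30, 'female'), ('p2', 'Bob', 40, 'male');
INSERT INTO School VALUES
    ('s1', 'Univ A', 'yes', 'yes'), ('s2', 'Univ B', 'no', 'yes');
""")

# The three reference queries from Table 14:
for sql in (
    "select avg(age) from Person",
    "select count(*) from Person where sex = 'male'",
    "select count(*) from School where info_985 = 'yes' and info_211 = 'yes'",
):
    print(sql, "->", conn.execute(sql).fetchone()[0])
```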
Table 14: Demonstrations of simple SQL queries.
Table ID | Question | Answer | SQL reference
Person | Average ages | 35.16 | select avg(age) from Person
Person | How many men | 12 | select count(*) from Person where sex = 'male'
School | How many schools are both '985' and '211' institutions? | 11 | select count(*) from School where info_985 = 'yes' and info_211 = 'yes';
| 2308.03427#75 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 75 | Wei Dai, Jionghao Lin, Hua Jin, Tongguang Li, Yi-Shan Tsai, Dragan Gašević, and Guanliang Chen. Can large language models provide feedback to students? a case study on chatgpt. In 2023 IEEE International Conference on Advanced Learning Technologies (ICALT), pp. 323–325. IEEE, 2023.
Richard J Davidson. Affective neuroscience and psychophysiology: Toward a synthesis. Psychophysiology, 40(5):655–665, 2003.
Yinlin Deng, Chunqiu Steven Xia, Haoran Peng, Chenyuan Yang, and Lingming Zhang. Large language models are zero-shot fuzzers: Fuzzing deep-learning libraries via large language models. In Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, pp. 423–435, 2023.
Aniket Deroy, Kripabandhu Ghosh, and Saptarshi Ghosh. How ready are pre-trained abstractive models and llms for legal case judgement summarization? arXiv preprint arXiv:2306.01248, 2023.
Paul Ekman and Wallace V Friesen. Facial action coding system. Environmental Psychology & Nonverbal Behavior, 1978. | 2308.03656#75 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 75 | Xizhou Zhu, Yuntao Chen, Hao Tian, Chenxin Tao, Weijie Su, Chenyuan Yang, Gao Huang, Bin Li, Lewei Lu, Xiaogang Wang, Y. Qiao, Zhaoxiang Zhang, and Jifeng Dai. Ghost in the minecraft: Generally capable agents for open-world environments via large language models with text-based knowledge and memory. ArXiv, abs/2305.17144, 2023.
# Part I Appendix
# Table of Contents
A Framework | 2308.03688#75 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 76 | 55 Lorenz, J. A stabilization theorem for dynamics of continuous opinions. Physica A: Statistical Mechanics and its Applications 355, 217-223, doi:https://doi.org/10.1016/j.physa.2005.02.086 (2005).
56 Wedin, E. & Hegarty, P. A Quadratic Lower Bound for the Convergence Rate in the One-Dimensional Hegselmann–Krause Bounded Confidence Dynamics. Discrete & Computational Geometry 53, 478-486, doi:10.1007/s00454-014-9657-7 (2015).
57 Bhattacharyya, A., Braverman, M., Chazelle, B. & Nguyen, H. L. in Proceedings of the 4th conference on Innovations in Theoretical Computer Science 61–66 (Association for Computing Machinery, Berkeley, California, USA, 2013).
58 Hammer, M. R., Bennett, M. J. & Wiseman, R. Measuring intercultural sensitivity: The intercultural development inventory. International Journal of Intercultural Relations 27, 421-443, doi:https://doi.org/10.1016/S0147-1767(03)00032-4 (2003). | 2308.03313#76 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 76 | Complex nested SQL queries: These queries contain subqueries, which are SQL queries nested inside a larger query. Nested queries can be used in various clauses such as SELECT, FROM, WHERE, and HAVING. They provide a way to perform multiple operations or calculations across multiple tables. We give the schemas of these tables in the SQL database in Tables 15, 16, 17, and 18 and list several examples in Table 19; a runnable sketch of the pattern follows the first two schemas.
Table 15: Schema of GoldenMelodyAwards
Column Name | Type
Nominated_Count | INTEGER
Competing_Count | INTEGER
Awards_Count | INTEGER
Award_Name | TEXT
Host | TEXT
Year | TIME

Table 16: Schema of the AwardNominees table
Column Name | Type
Singer_ID | INTEGER
Nominated_Work | INTEGER
Award_Name | TEXT
Award_Edition_ID | TEXT
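As a concrete illustration of the nesting pattern (ours, not taken from the paper), a NOT IN subquery over the schema above can be run against SQLite as follows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE GoldenMelodyAwards (
    Nominated_Count INTEGER, Competing_Count INTEGER, Awards_Count INTEGER,
    Award_Name TEXT, Host TEXT, Year TIME
);
""")

# Inner query: the two hosts with the lowest average award counts.
# Outer query: award editions run by every other host.
nested_sql = """
SELECT Award_Name FROM GoldenMelodyAwards
WHERE Host NOT IN (
    SELECT Host FROM GoldenMelodyAwards
    GROUP BY Host
    ORDER BY AVG(Awards_Count) ASC
    LIMIT 2
)
"""
print(conn.execute(nested_sql).fetchall())  # [] until rows are inserted
```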
Complex nested queries utilizing multiple tools: These are advanced queries that involve multiple tools, such as SQL queries, Python code generation, user-defined functions, etc. We give the schemas
Table 17: Schema of the Singers table
Column Name | Type
Name | TEXT
Song_Count | INTEGER
Album_Count | INTEGER
Fan_Count | INTEGER
Gender | TEXT
Singer_ID | INTEGER
Table 18: Schema of the RecordCompanies table
Column Name | Type
Record_Company | TEXT
Signing_Date | TIME
Singer_ID | INTEGER
| 2308.03427#76 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 76 | Paul Ekman and Wallace V Friesen. Facial action coding system. Environmental Psychology & Nonverbal Behavior, 1978.
Zhiyu Fan, Xiang Gao, Martin Mirchev, Abhik Roychoudhury, and Shin Hwei Tan. Automated repair of programs from large language models. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), pp. 1469–1481. IEEE, 2023.
Tanya Guitard, Stéphane Bouchard, Claude Bélanger, and Maxine Berthiaume. Exposure to a standardized catastrophic scenario in virtual reality or a personalized scenario in imagination for generalized anxiety disorder. Journal of clinical Medicine, 8(3):309, 2019.
Neil Harrington. The frustration discomfort scale: Development and psychometric properties. Clinical Psychology & Psychotherapy: An International Journal of Theory & Practice, 12(5):374–387, 2005.
Julie D Henry and John R Crawford. The short-form version of the depression anxiety stress scales (dass-21): Construct validity and normative data in a large non-clinical sample. British journal of clinical psychology, 44(2):227–239, 2005.
| 2308.03656#76 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 76 | A.1 Traditional Evaluation Frameworks
A.2 Our Designed Evaluation Framework
A.3 Implementation of Max-Flow Algorithm
B Operating System
B.1 Dataset details
B.2 Actions
B.3 Prompt Example
C Database
C.1 Dataset Details
C.2 Data Augmentation
C.3 Prompt Example
D Knowledge Graph
D.1 Dataset Details
D.2 Prompt Example
E Digital Card Game
E.1 Dataset Details
E.2 The Attributes of Fish
E.3 Prompt Example
F Lateral Thinking Puzzles
F.1 Dataset Details
F.2 Evaluation on LTP System
F.3 LTP Game Progress and Termination
| 2308.03688#76 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 77 | 59 Holley, R. A. & Liggett, T. M. Ergodic Theorems for Weakly Interacting Infinite Systems and the Voter Model. The Annals of Probability 3, 643-663 (1975).
60 Dittmer, J. C. Consensus formation under bounded confidence. Nonlinear Analysis: Theory, Methods & Applications 47, 4615-4621, doi:https://doi.org/10.1016/S0362-546X(01)00574-0 (2001).
61 Amblard, F., Bouadjio-Boulic, A., Gutiérrez, C. S. & Gaudou, B. in 2015 Winter Simulation Conference (WSC). 4021-4032.
62 Liu, S., He, L. & Max Shen, Z.-J. On-Time Last-Mile Delivery: Order Assignment with Travel- Time Predictors. Management Science 67, 4095-4119, doi:10.1287/mnsc.2020.3741 (2020).
63 Bien, J. & Tibshirani, R. Hierarchical Clustering With Prototypes via Minimax Linkage. Journal of the American Statistical Association 106, 1075-1084, doi:10.1198/jasa.2011.tm10183 (2011).
| 2308.03313#77 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 77 | Table 18: Schema of the RecordCompanies table
Column Name | Type
Record_Company | TEXT
Signing_Date | TIME
Singer_ID | INTEGER
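The last demonstration in Table 19 below divides two scalar subqueries drawn from the same table. A minimal sketch of ours (illustrative data, not the paper's) shows the FROM-clause subquery pattern:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE GoldenMelodyAwards (Awards_Count INTEGER, Award_Name TEXT);
INSERT INTO GoldenMelodyAwards VALUES
    (10, '27th Golden Melody'), (10, '28th Golden Melody');
""")

# Two FROM-clause subqueries, each reduced to a single Awards_Count value.
ratio = conn.execute("""
    SELECT a.Awards_Count * 1.0 / b.Awards_Count
    FROM (SELECT Awards_Count FROM GoldenMelodyAwards
          WHERE Award_Name = '27th Golden Melody') AS a,
         (SELECT Awards_Count FROM GoldenMelodyAwards
          WHERE Award_Name = '28th Golden Melody') AS b
""").fetchone()[0]
print(ratio)  # 1.0 with the illustrative rows above
```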
Table 19: Demonstrations of complex nested SQL queries.
Question | Answer | SQL reference
Golden Melody hosts, excluding the two with the least awards. | "26th Golden Melody", "27th Golden Melody" | select Award_Name from GoldenMelodyAwards where Host not in ( select Host from GoldenMelodyAwards group by Host order by avg ( Awards_Count ) asc limit 2 )
Names of singers never nominated for Golden Melody Awards. | "Jay Chou", "Jian Cui" | select Name from Singers where Singer_ID not in ( select Singer_ID from AwardNominees )
Name and gender of singers without a record company. | "Penny Tai: Female" | select Name, Gender from Singers where Singer_ID not in ( select Singer_ID from RecordCompanies );
How many times is the 27th Golden Melody count of the 28th's? | 1 | select a.Awards_Count / b.Awards_Count from ( select Awards_Count from GoldenMelodyAwards where Award_Name == '27th Golden Melody' ) a , ( select Awards_Count from GoldenMelodyAwards where Award_Name == '28th Golden Melody' ) b
| 2308.03427#77 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 77 | Jen-tse Huang, Wenxuan Wang, Man Ho Lam, Eric John Li, Wenxiang Jiao, and Michael R Lyu. Revisiting the reliability of psychological scales on large language models. arXiv preprint arXiv:2305.19926, 2023a.
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, and Michael R Lyu. Who is chatgpt? benchmarking llms' psychological portrayal using psychobench. arXiv preprint arXiv:2310.01386, 2023b.
Guangyuan Jiang, Manjie Xu, Song-Chun Zhu, Wenjuan Han, Chi Zhang, and Yixin Zhu. Evaluating and inducing personality in pre-trained language models. arXiv preprint arXiv:2206.07550, 2022.
Investigating the ability of gpt-3.5 to express personality traits and gender differences. arXiv preprint arXiv:2305.02547, 2023. | 2308.03656#77 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 77 | Thinking Puzzles
F.1 Dataset Details
F.2 Evaluation on LTP System
F.3 LTP Game Progress and Termination
F.4 Prompt Example
G House-holding
G.1 Dataset Details
G.2 Prompt Example
H Web Shopping
H.1 Dataset Details
H.2 Prompt Example
I Web Browsing
I.1 Dataset Details
I.2 Prompt Example
J Detailed Analysis
J.1 Validity Analysis of Execution Outcomes
J.1.1 Motivation of Validity Analysis
J.1.2 Definition of Validity Analysis
J.1.3 Validity Analysis of Models
J.2 Findings
J.2.1 Instruction Following Matters
| 2308.03688#77 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03427 | 78 | of the two tables in the SQL database in Tables 20 and 21 and list several examples in Table 22. To verify the planning ability of the LLM-based AI agents, we select this type of query; a runnable sketch of the SQL-plus-Python pattern follows the schemas below.
Table 20: Schema of the Journal table
Column Name | Type
Name | TEXT
First_Issue_Date | TIME
Journal_ID | INTEGER
Category | TEXT
Sponsor_Organization | TEXT
Country | TEXT
Language | TEXT
Publication_Count | INTEGER

Table 21: Schema of the CoverPersonality table
Column Name | Type
Person_ID | INTEGER
Journal_ID | INTEGER
Count | INTEGER
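To make the "multiple tools" idea concrete, here is a minimal sketch of ours (not the paper's implementation) in which an SQL step over the two schemas above feeds a Python math step, mirroring the math.exp / math.factorial / math.gcd calls that appear in the Table 22 demonstrations. All rows and the specific function choice are illustrative:

```python
import math
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Journal (
    Name TEXT, First_Issue_Date TIME, Journal_ID INTEGER, Category TEXT,
    Sponsor_Organization TEXT, Country TEXT, Language TEXT,
    Publication_Count INTEGER
);
CREATE TABLE CoverPersonality (
    Person_ID INTEGER, Journal_ID INTEGER, Count INTEGER
);
-- Illustrative rows only.
INSERT INTO Journal (Name, Language, Journal_ID) VALUES
    ('The Economist', 'Chinese', 1), ('Reader''s Digest', 'English', 2);
INSERT INTO CoverPersonality VALUES (7, 1, 3);
""")

# Step 1 ("SQL Generator"): journals that never had a cover personality.
rows = conn.execute("""
    SELECT Name, Language FROM Journal
    WHERE Journal_ID NOT IN (SELECT Journal_ID FROM CoverPersonality)
""").fetchall()

# Step 2 ("Python REPL"): a follow-up numeric step, e.g. math.exp(3) = 20.08...
print(rows, math.exp(3))
```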
| 2308.03427#78 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 78 | Investigating the ability of gpt-3.5 to express personality traits and gender differences. arXiv preprint arXiv:2305.02547, 2023.
Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, and Zhaopeng Tu. Is chatgpt a good translator? a preliminary study. arXiv preprint arXiv:2301.08745, 2023.
Sungmin Kang, Juyeon Yoon, and Shin Yoo. Large language models are few-shot testers: Exploring llm-based general bug reproduction. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), pp. 2312–2323. IEEE, 2023.
Saketh Reddy Karra, Son The Nguyen, and Theja Tulabandhula. Estimating the personality of white-box language models. arXiv preprint arXiv:2204.12000, 2022.
Matthew C Keller and Randolph M Nesse. Is low mood an adaptation? evidence for subtypes with symptoms that match precipitants. Journal of affective disorders, 86(1):27–35, 2005. | 2308.03656#78 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 78 | J.2.1 Instruction Following Matters | 2308.03688#78 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03427 | 79 | Table 22: Demonstrations of complex nested queries utilizing multiple tools. [Table body extracted rotated and unreadable; recoverable column headers: Answer, Planning, Tools, SQL reference, Code reference. Legible fragments include the tools "Python REPL" and "SQL Generator", the answer 20.08 from return math.exp(3), and an SQL condition "where Journal_ID not in ( select ... from CoverPersonality )".] | 2308.03427#79 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 79 | Tom R Kupfer, Morgan J Sidari, Brendan P Zietsch, Patrick Jern, Joshua M Tybur, and Laura W Wesseldijk. Why are some people more jealous than others? genetic and environmental factors. Evolution and Human Behavior, 43(1):26–33, 2022.
Richard S Lazarus. Emotion and adaptation. Oxford University Press, 1991.
Mark R Leary. A brief version of the fear of negative evaluation scale. Personality and social psychology bulletin, 9(3):371–375, 1983.
Choonghyoung Lee, Jahyun Song, and Bill Ryan. When employees feel envy: The role of psychological capital. International Journal of Hospitality Management, 105:103251, 2022.
Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, and Sowon Hahn. Chain of empathy: Enhancing empathetic response of large language models based on psychotherapy models. arXiv preprint arXiv:2311.04915, 2023.
Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, and Xing Xie. Large language models understand and can be enhanced by emotional stimuli. arXiv preprint arXiv:2307.11760, 2023a. | 2308.03656#79 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03427 | 80 | [Rotated-table text, garbled by PDF extraction. Recoverable fragments include tool sequences such as ["Python REPL", "SQL Generator"], Python snippets like "import math; math.factorial(4)", SQL selecting Name and Language from a Journal table where Journal_ID is not in CoverPersonality, and answers such as "Reader's Digest: English" and "The Economist: Chi[nese]". A hedged reconstruction sketch follows this record.] | 2308.03427#80 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
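The fragments recovered from the garbled chunk above pair a "Python REPL" step with an "SQL Generator" step over Journal and CoverPersonality tables. Below is a minimal, hedged sketch of one such row; the schema, column names, and sample rows are assumptions inferred from the fragments, not the benchmark's actual database.

```python
# Hedged reconstruction of one garbled table row: a "Python REPL" call
# followed by an "SQL Generator" query. Schema and rows are assumptions.
import math
import sqlite3

# "Python REPL" fragment: import math; math.factorial(4)
factorial_of_4 = math.factorial(4)  # 24

# "SQL Generator" fragments: select Name, Language from Journal ...
# where Journal_ID not in (select Journal_ID from CoverPersonality)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Journal (Journal_ID INTEGER, Name TEXT, Language TEXT)")
conn.execute("CREATE TABLE CoverPersonality (Journal_ID INTEGER, Person TEXT)")
conn.execute("INSERT INTO Journal VALUES (1, 'Reader''s Digest', 'English')")
conn.execute("INSERT INTO Journal VALUES (2, 'The Economist', 'Chinese')")
conn.execute("INSERT INTO CoverPersonality VALUES (2, 'Some Person')")

rows = conn.execute(
    "SELECT Name, Language FROM Journal "
    "WHERE Journal_ID NOT IN (SELECT Journal_ID FROM CoverPersonality)"
).fetchall()
print(factorial_of_4, rows)  # 24 [("Reader's Digest", 'English')]
```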
2308.03656 | 80 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, and Xing Xie. The good, the bad, and why: Unveiling emotions in generative AI. arXiv preprint arXiv:2312.11111, 2023b.
Xingxuan Li, Yutong Li, Shafiq Joty, Linlin Liu, Fei Huang, Lin Qiu, and Lidong Bing. Does GPT-3 demonstrate psychopathy? Evaluating large language models from a psychological perspective. arXiv preprint arXiv:2212.10529, 2022.
Tobias Luck and Claudia Luck-Sikorski. The wide variety of reasons for feeling guilty in adults: findings from a large cross-sectional web-based survey. BMC psychology, 10(1):1–20, 2022.
Ryan C Martin and Eric R Dahlen. The angry cognitions scale: A new inventory for assessing cognitions in anger. Journal of Rational-Emotive & Cognitive-Behavior Therapy, 25:155–173, 2007.
| 2308.03656#80 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 80 | [Table-of-contents leader dots and page numbers (21–44); no substantive text survives extraction.] | 2308.03688#80 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03427 | 81 | [Rotated-table text, garbled by PDF extraction, continuing the previous chunk. Recoverable fragments: "...nese" (completing "The Economist: Chinese"), a math.gcd call, "math.sqrt(24)" with the answer fragment "4.8989795..." and the answer list [4.8989795..., "English"], plus an "SQL Generator" query of the form "select Language from Journal group by Language having avg(Publication_Count) > (select avg(Publication_Count) from Journal)". Numeric sanity checks follow this record.] | 2308.03427#81 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
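Two numeric fragments recovered above can be checked directly. Pairing "4.8989795..." with math.sqrt(24) follows from the adjacent fragments; the gcd operands below are purely illustrative, since the garbled text only shows that a gcd call appears.

```python
# Sanity checks for numeric fragments recovered from the garbled table.
import math

print(math.sqrt(24))     # ~4.898979485566356, matching the "4.8989795..." fragment
print(math.gcd(212, 4))  # 4; a gcd call appears in the fragments, operands assumed
```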
2308.03656 | 81 |
John D Mayer, Peter Salovey, and David R Caruso. Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) user's manual. 2002.
Marilù Miotto, Nicola Rossberg, and Bennett Kleinberg. Who is GPT-3? An exploration of personality, values and demographics. In Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS), pp. 218–227, Abu Dhabi, UAE, November 2022. Association for Computational Linguistics. URL https://aclanthology.org/2022.nlpcss-1.24.
Agnes Moors, Phoebe C Ellsworth, Klaus R Scherer, and Nico H Frijda. Appraisal theories of emotion: State of the art and future development. Emotion Review, 5(2):119–124, 2013. | 2308.03656#81 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03427 | 82 | [Rotated-table text, garbled by PDF extraction, continuing the previous chunk. Recoverable fragments: a "Python REPL" plus "SQL Generator" pair involving "log base" and "math.log10(5)" with the answer fragment "0.69897", a query selecting Person_ID from CoverPersonality where a Count is below a threshold, and an answer listing cover personalities such as "Qing Hai, Xiaoming Huang, Cristiano Ronaldo, Kobe Bryant" together with their over-appearance frequency. A numeric sanity check follows this record.] | 2308.03427#82 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
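The "0.69897..." fragment above matches the base-10 logarithm of 5. Whether the original question asked for log10(5) or for a base-5 logarithm (the "log base" fragment is ambiguous) is not recoverable, so both readings are shown.

```python
# Both plausible readings of the "log base" fragment; only the first
# matches the recovered answer fragment "0.69897".
import math

print(math.log10(5))    # ~0.69897
print(math.log(10, 5))  # ~1.43068, the base-5 alternative reading
```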
2308.03656 | 82 | Seishu Nakagawa, Hikaru Takeuchi, Yasuyuki Taki, Rui Nouchi, Atsushi Sekiguchi, Yuka Kotozaki, Carlos Makoto Miyauchi, Kunio Iizuka, Ryoichi Yokoyama, Takamitsu Shinada, et al. Comprehensive neural networks for guilty feelings in young adults. Neuroimage, 105:248–256, 2015.
OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
Joowon Park, Sachin Banker, Tamara Masters, and Grace Yu-Buck. Person vs. purchase comparison: how material and experiential purchases evoke consumption-related envy in others. Journal of Business Research, 165:114014, 2023.
Susan M Pfeiffer and Paul TP Wong. Multidimensional jealousy. Journal of social and personal relationships, 6(2):181–196, 1989. | 2308.03656#82 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 82 | (a) Operating System (OS) Task: "Find the total number of non-empty directories inside the '/etc' directory." Action Space: Any valid bash commands Observation: System standard output
(b) Database (DB) Task: "What was the total number of medals won by United States?", given the table "Olympic Medals" Action space: Any valid SQL commands Observation: MySQL CLI interface output
(c) Knowledge Graph (KG) Task: "Find tropical cyclones that are similar to Hurricane Marie and affected Eastern North America." Action space: Basic KG-querying tools Observation: Query results
(d) Digital Card Game (DCG) Task: "Compete against another player using four 'fish' cards in 'Aquawar' game." Action space: Four 'fish' cards and Assertion Observation: Battle process, status of 'fish' (a hedged action sketch for panels (a) and (b) follows this record) | 2308.03688#82 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
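To make panels (a) and (b) above concrete, here is a hedged sketch of single-turn actions an agent might emit. The bash one-liner and the SQL (including the assumed Total and Team columns) are illustrative guesses, not AgentBench's reference solutions.

```python
# Plausible (not official) agent actions for Figure 4, panels (a) and (b).
import subprocess

# (a) OS task: count non-empty directories under /etc.
bash_action = "find /etc -mindepth 1 -type d ! -empty | wc -l"
result = subprocess.run(["bash", "-c", bash_action], capture_output=True, text=True)
print(result.stdout.strip())  # the observation the agent would read back

# (b) DB task: total medals for United States; column names are assumptions.
sql_action = "SELECT SUM(Total) FROM `Olympic Medals` WHERE Team = 'United States';"
print(sql_action)
```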
2308.03427 | 83 | [Rotated-table question column, garbled by PDF extraction into one or two reversed words per line. Recoverable question fragments: "Calculate ... the names ... of languages ... with no cover person[ality]", "Compute 4's ... and compare ... 212", "names and languages of journals ... cover personality", "Calculate the ... root of 24", "Compute the overall average ..., then ... whose number of issues exceeds the average", "... cover fig[u]res less than 10 ... 5 ...", and the Journal & Cover[Personality] table IDs. A hedged SQL rendering of the above-average question follows this record.] | 2308.03427#83 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
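One question recovered above asks for journals whose number of issues exceeds the overall average. A hedged SQL rendering, with an assumed Journal(Name, Issue_Count) schema and toy rows:

```python
# Assumed schema and data; only the query pattern is taken from the
# recovered question text.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Journal (Name TEXT, Issue_Count INTEGER)")
conn.executemany("INSERT INTO Journal VALUES (?, ?)",
                 [("A", 12), ("B", 4), ("C", 52)])
rows = conn.execute(
    "SELECT Name FROM Journal "
    "WHERE Issue_Count > (SELECT AVG(Issue_Count) FROM Journal)"
).fetchall()
print(rows)  # [('C',)] for this toy data (the average is about 22.7)
```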
2308.03656 | 83 | Susan M Pfeiffer and Paul TP Wong. Multidimensional jealousy. Journal of social and personal relationships, 6(2):181–196, 1989.
Haocong Rao, Cyril Leung, and Chunyan Miao. Can ChatGPT assess human personalities? A general evaluation framework. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 1184–1194, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-emnlp.84. URL https://aclanthology.org/2023.findings-emnlp.84.
Peter Romero, Stephen Fitz, and Teruo Nakatsuma. Do GPT language models suffer from split personality disorder? The advent of substrate-free psychometrics. Research Square preprint, 2023. doi: 10.21203/rs.3.rs-2717108/v1.
Ira J Roseman and Craig A Smith. Appraisal theory. Appraisal processes in emotion: Theory, methods, research, pp. 3–19, 2001.
James A Russell. A circumplex model of affect. Journal of personality and social psychology, 39(6):1161, 1980. | 2308.03656#83 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 83 | (e) Lateral Thinking Puzzles (LTP) Task: "A man sleeps with the lights off, and the next morning he suicides after opening windows. Why?" Action Space: Any binary questions Observation: "Yes", "No", or "Irrelevant"
(f) House-holding (HH) Task: "Clean some soapbar and put it in countertop" Action space: A list of allowed actions in the room, or other accessible rooms Observation: Results after the action. (A sketch of this multi-turn loop follows this record.)
[Figure residue: example WebShop product titles spilled from panel (g), e.g., "Lodge Bedspread Full/Queen Size Quilt with 2 Shams", "Cabin 3-Piece Reversible All Season Quilt Set", "Rustic Quilt Coverlet Bed Set | Stonehurst Collection".] | 2308.03688#83 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
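Panels (e) and (f) above share one multi-turn shape: the agent emits a text action and the environment answers with a text observation until the episode ends. A schematic sketch of that loop, not AgentBench's actual API:

```python
# Minimal agent-environment loop shared by the LTP and HH examples above.
from typing import Callable, Tuple

def run_episode(env_step: Callable[[str], Tuple[str, bool]],
                agent: Callable[[str], str],
                first_obs: str, max_turns: int = 20) -> str:
    obs, done = first_obs, False
    for _ in range(max_turns):
        if done:
            break
        action = agent(obs)           # e.g., a binary question (LTP) or "clean soapbar" (HH)
        obs, done = env_step(action)  # e.g., "Yes"/"No"/"Irrelevant", or the room's response
    return obs
```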
2308.03656 | 84 | James A Russell. A circumplex model of affect. Journal of personality and social psychology, 39(6):1161, 1980.
Jérôme Rutinowski, Sven Franke, Jan Endendyk, Ina Dormuth, and Markus Pauly. The self-perception and political biases of ChatGPT. arXiv preprint arXiv:2304.07333, 2023.
John Sabini, Michael Siepmann, Julia Stein, and Marcia Meyerowitz. Who is embarrassed by what? Cognition & Emotion, 14(2):213–240, 2000.
John Sabini, Brian Garvey, and Amanda L Hall. Shame and embarrassment revisited. Personality and Social Psychology Bulletin, 27(1):104–117, 2001.
Mustafa Safdari, Greg Serapio-García, Clément Crepy, Stephen Fitz, Peter Romero, Luning Sun, Marwa Abdulhai, Aleksandra Faust, and Maja Matarić. Personality traits in large language models. arXiv preprint arXiv:2307.00184, 2023.
Klaus R Scherer. Appraisal theory. 1999. | 2308.03656#84 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 84 | (g) Web Shopping (WS) Task: "Looking for a queen size bedspread set in the color redwood, and price lower than 70." Action space: Search (generate keywords) and Click (choose from all clickable buttons) Observation: Products' descriptions; the webpage (h) Web Browsing (WB) Task: "Find a latest post with more than 10k upvotes in r/announcements community and upvote it." Action space: 1) Choose one out of all HTML elements in the webpage; 2) Click, Type, or Select Options Observation: Page HTML (optional: screenshot) (a typed action-space sketch follows this record)
Figure 4: Examples of all environments in AGENTBENCH.
A FRAMEWORK
A.1 TRADITIONAL EVALUATION FRAMEWORKS
Traditional evaluation frameworks can be categorized into two types:
Traditional Tasks (e.g., single-turn generation, classification, etc.). These frameworks are designed for specific tasks and may not be suitable for more complex tasks involving multi-turn interactions.
Agent-based Tasks (tasks with multi-turn interactions). These frameworks are typically tailored to a specific task by the creators of the dataset. They often suffer from several limitations: | 2308.03688#84 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
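The WS and WB action spaces in panels (g) and (h) above reduce to a few typed actions. One hedged way to model them; the class and field names are illustrative, not AgentBench's own code:

```python
# Illustrative action types for the WebShop (WS) and Web Browsing (WB) panels.
from dataclasses import dataclass

@dataclass
class Search:       # WS: generate search keywords
    keywords: str

@dataclass
class Click:        # WS/WB: choose a clickable button or HTML element
    target: str

@dataclass
class Type:         # WB: type text into the chosen element
    target: str
    text: str

action = Search(keywords="queen size bedspread set redwood under 70")
print(action)
```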
2308.03427 | 85 | [Garbled tail of the preceding rotated table; the reversed fragment reads "across journals."]
B Prompts Design
Figure 8: The evaluation prompt for tool order planning.
You are a strategy model. Given a problem and a set of tools, you need to generate a sequence of tools to determine the solution to the problem.
Each tool in the toolset is defined as follows:
SQL Generator: Given an input problem and a database, it creates a syntactically correct SQLite query statement.
Python Generator: Given an input problem and some information, it generates a syntactically correct Python code snippet.
Please use the following format:
Question: This is the original question.
Error: This is the previously generated error output.
Tool: These are the tools to be selected and the order in which they are called. Please note to generate a Tool different from the Error.
Result: The final result output by the tool.
Here are some examples of mapping problems to tools:
Question: What is the square of the number of albums by Jolin Tsai?
Error:
Tool: ["SQL Generator", "Python Generator"]
Result: 100
Question: First, calculate the square of 40, denoted as A, and then find the names of all the singers whose total number of fans is less than A.
Error:
Tool: ["Python Generator", "SQL Generator"]
Result: ['Jolin Tsai']
Let's get started:
Question: {question}
Error: {error}
Tool: | 2308.03427#85 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
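A hedged sketch of how the Figure 8 prompt above might be filled in and the model's Tool line parsed. call_llm is a placeholder for whatever completion API is used; TPTU's actual driver code is not shown in the chunk.

```python
# The bracketed tool list the prompt asks for is valid JSON, so it can be
# parsed directly. `call_llm` is hypothetical.
import json
from typing import Callable, List

PROMPT_TAIL = "Let's get started:\nQuestion: {question}\nError: {error}\nTool:"

def plan_tools(question: str, error: str, call_llm: Callable[[str], str]) -> List[str]:
    prompt = PROMPT_TAIL.format(question=question, error=error)
    completion = call_llm(prompt)  # e.g. ' ["SQL Generator", "Python Generator"]'
    return json.loads(completion.strip())
```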
2308.03656 | 85 | Klaus R Scherer. Appraisal theory. 1999.
Kotaro Shoji, Jinni A Harrigan, Stanley B Woll, and Steven A Miller. Interactions among situations, neuroticism, and appraisals in coping strategy choice. Personality and Individual Differences, 48(3):270–276, 2010.
Kate Simpson, Dawn Adams, Kathryn Ambrose, and Deb Keen. "My cheeks get red and my brain gets scared": A computer assisted interview to explore experiences of anxiety in young children on the autism spectrum. Research in Developmental Disabilities, 113:103940, 2021.
Mark JM Sullman. Anger amongst New Zealand drivers. Transportation Research Part F: Traffic Psychology and Behaviour, 9(3):173–184, 2006.
Ala N. Tak and Jonathan Gratch. Is GPT a computational model of emotion? Detailed analysis. arXiv preprint arXiv:2307.13779, 2023.
Bertil Törestad. What is anger provoking? A psychophysical study of perceived causes of anger. Aggressive Behavior, 16(1):9–26, 1990. | 2308.03656#85 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 85 | Agent-based Tasks (tasks with multi-turn interactions). These frameworks are typically tailored to a specific task by the creators of the dataset. They often suffer from several limitations:
• They are designed for a specific task, limiting their applicability to other tasks.
• Communication between components (Task, Agent, and Evaluation) usually occurs within a single process or through the creation of child processes, necessitating evaluation on the same device.
• They can only evaluate one task with one agent at a time.
A.2 OUR DESIGNED EVALUATION FRAMEWORK
To address the limitations of traditional agent-based evaluation frameworks, we have designed a novel framework with the following features:
Decoupled S/C Architecture. Our framework decouples the Task Server, Agent Server, and Evaluation Client components, enabling separate deployments. They can communicate via HTTP interactions, allowing them to run on different devices, thus eliminating the need for co-location to satisfy the requirements of both Task and Agent.
Agent-Task Collaborative Evaluation. Our framework supports collaborative evaluation of multiple agents and tasks in various combinations simultaneously. This flexibility enables more comprehensive testing scenarios.
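The next paragraph credits network flow algorithms with maximizing evaluation efficiency. One plausible reading, sketched with networkx below (AgentBench's actual scheduler may differ): treat per-agent worker counts and per-task concurrency limits as edge capacities, and let max flow decide how many agent-task evaluations can run at once.

```python
# Hedged sketch: agent-task co-scheduling as a max-flow problem.
# Agent names, capacities, and requested pairs are made-up examples.
import networkx as nx

G = nx.DiGraph()
agents = {"gpt-4": 2, "llama-2": 4}   # agent -> available workers
tasks = {"os": 3, "db": 2}            # task -> concurrent evaluation capacity
pairs = [("gpt-4", "os"), ("gpt-4", "db"), ("llama-2", "os")]

for a, cap in agents.items():
    G.add_edge("S", f"agent:{a}", capacity=cap)
for t, cap in tasks.items():
    G.add_edge(f"task:{t}", "T", capacity=cap)
for a, t in pairs:
    G.add_edge(f"agent:{a}", f"task:{t}", capacity=1)  # one eval per pair at a time

flow_value, flow = nx.maximum_flow(G, "S", "T")
print(flow_value)  # number of evaluations runnable concurrently
```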
Network Flow Algorithms. We have incorporated network flow algorithms into the Evaluation Client, maximizing evaluation efficiency. This optimization ensures that both Agent and Task Workers are utilized to their fullest potential. | 2308.03688#85 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03656 | 86 | Bertil Törestad. What is anger provoking? A psychophysical study of perceived causes of anger. Aggressive Behavior, 16(1):9–26, 1990.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
Xintao Wang, Yaying Fei, Ziang Leng, and Cheng Li. Does role-playing chatbots capture the character personalities? Assessing personality traits for role-playing chatbots. arXiv preprint arXiv:2310.17976, 2023.
David Watson, Lee Anna Clark, and Auke Tellegen. Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54(6):1063, 1988.
Haoran Wu, Wenxuan Wang, Yuxuan Wan, Wenxiang Jiao, and Michael Lyu. ChatGPT or Grammarly? Evaluating ChatGPT on grammatical error correction benchmark. arXiv preprint arXiv:2303.13648, 2023. | 2308.03656#86 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 86 | Network Flow Algorithms. We have incorporated network flow algorithms into the Evaluation Client, maximizing evaluation efficiency. This optimization ensures that both Agent and Task Workers are utilized to their fullest potential.
Resumable Evaluation. Our framework includes a resumable evaluation feature, making it easy to recover and continue interrupted evaluations seamlessly.
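A minimal sketch of one way such resumability can be realized (the JSONL run-file layout and field names are illustrative assumptions):

```python
# Hypothetical resumable runner: finished samples are appended to a JSONL
# file, so a restarted run skips everything already evaluated.
import json
import os

def load_done(run_file):
    if not os.path.exists(run_file):
        return set()
    with open(run_file) as f:
        return {json.loads(line)["sample_id"] for line in f}

def evaluate_all(samples, evaluate_one, run_file="run.jsonl"):
    done = load_done(run_file)
    with open(run_file, "a") as f:
        for sample in samples:
            if sample["id"] in done:
                continue  # already finished before the interruption
            result = evaluate_one(sample)
            f.write(json.dumps({"sample_id": sample["id"], "result": result}) + "\n")
            f.flush()  # persist progress immediately
```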
With these advancements, our evaluation framework overcomes the limitations of traditional approaches and provides a more versatile, efficient, and scalable solution for evaluating intelligent agents in multi-turn tasks.
The overall structure of our framework is depicted in Figure 5.
A.3 IMPLEMENTATION OF MAX-FLOW ALGORITHM
In our evaluation process, we employ the Edmonds–Karp algorithm (Edmonds & Karp, 1972), a practical implementation of the Ford–Fulkerson method (Ford Jr & Fulkerson, 1962), to compute the maximum flow in a network with a time complexity of $O(|V||E|^2)$. | 2308.03688#86 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03427 | 87 | You are a strategy model. Given a problem and a set of tools, you need to generate a sequence of tools to determine the solution to the problem.
Each tool in the toolset is defined as follows:
SQL Generator: Given an input problem and a database, it creates a syntactically correct SQLite query statement.
Python Generator: Given an input problem and some information, it generates a syntactically correct Python code snippet.
Please use the following format:
Question: This is the original question.
Error: This is the previously generated error output.
Tool: These are the tools to be selected and the order in which they are called. Please note to generate a Tool different from the Error.
Query: This is the sub-problem derived from the original question that needs to be input when calling the tool. Please note to generate a Query different from the Error.
Result: The final result output by the tool.
Here are some examples of mapping problems to tools:
Question: What is the square of the number of albums by Jolin Tsai?
Error:
Tool: ["SQL Generator", "Python Generator"]
Query: ["What is the number of albums by Jolin Tsai?", "What is the square of the number of albums by Jolin Tsai?"]
Result: 100
Question: | 2308.03427#87 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 87 | Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, and Stacy Marsella. Investigating large language models' perception of emotion using appraisal theory. arXiv preprint arXiv:2310.04450, 2023.
Hongli Zhan, Desmond Ong, and Junyi Jessy Li. Evaluating subjective cognitive appraisals of emotions from large language models. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 14418–14446, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-emnlp.962. URL https://aclanthology.org/2023.findings-emnlp.962.
Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. Calibrate before use: Improving few-shot performance of language models. In International Conference on Machine Learning, pp. 12697–12706. PMLR, 2021.
# A STATISTICS OF HUMAN SUBJECTS | 2308.03656#87 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 87 | To formalize the problem, consider a scenario with $n$ agents, denoted as $A_1, A_2, \cdots, A_n$, and $m$ tasks, denoted as $T_1, T_2, \cdots, T_m$. Our objective is to conduct evaluations in $l$ different groups, each focusing on the pair $(A_{x_k}, T_{y_k})$, where $1 \le k \le l$. Additionally, for every such pair $(A_{x_k}, T_{y_k})$, we should evaluate $s_k$ samples. The numbers of workers for agent $A_k$ and task $T_k$ are denoted as $w(A_k)$ and $w(T_k)$, respectively.
The flow graph we construct can be described as $G = \langle V, E \rangle$, where the vertex set $V$ is defined as
$V = \{A_k \mid 1 \le k \le n\} \cup \{T_k \mid 1 \le k \le m\} \cup \{S, D\},$ (1)
[Figure (architecture diagram; its caption appears as Figure 5 in the next chunk). Recoverable elements: Agent Servers exposing deployed models through APIs; an Evaluation Client running the max-flow assignment algorithm; a Task Controller with Task Workers (e.g., ubuntu containers) exchanging Action/Observation via API.] | 2308.03688#87 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03427 | 88 | ["What is the number of albums by Jolin Tsai?", "What is the square of the number of albums by Jolin Tsai?"]
Result: 100
Question: First, calculate the square of 40, denoted as A, and then find the names of all the singers whose total number of fans is less than A.
Error:
Tool: ["Python Generator", "SQL Generator"]
Query: ["A is the square of 40, what is the value of A?", "What are the names of all the singers whose total number of fans is less than A?"]
Result: ['Jolin Tsai']
Let's get started:
Question: {question}
Error: {error}
Tool: {tools}
Query: | 2308.03427#88 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 88 | # A STATISTICS OF HUMAN SUBJECTS
This section presents the demographic distribution of the human subjects involved in our user study. At the beginning of the questionnaire, all human subjects are asked for this basic information in an anonymous form, protecting individuals' privacy. We plot the distribution of age group, gender, region, education level, and employment status in Fig. 3, Fig. 4, Fig. 5, Fig. 6, and Fig. 7, respectively. We also plot each group's average results on PANAS, including positive and negative affect before and after imagining the given situations. With the results, we are able to instruct LLMs to realize a specific demographic group and measure the emotional changes to see whether the LLMs can simulate results from different human populations. For instance, an older female may exhibit a lower level of negative affect.
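As a minimal sketch of such group-conditioned measurement (the prompt wording and function name are illustrative assumptions, not the exact EmotionBench protocol):

```python
# Hypothetical persona-conditioned prompt for administering PANAS.
def build_persona_prompt(age_group, gender, situation):
    return (
        f"Imagine you are a {gender} aged {age_group}. "
        f"You encounter the following situation: {situation} "
        "Now rate each PANAS item from 1 (very slightly or not at all) "
        "to 5 (extremely)."
    )

# Example: probe whether the model mirrors an older female group's profile.
print(build_persona_prompt("65+", "female",
                           "Your flight is cancelled right before boarding."))
```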
[Figure: bar chart "Scores and Count Grouped by Age Group"; PANAS Positive/Negative Before/After scores and subject counts per age group (18-24, 25-34, 35-44, 45-54, 55-64, 65+).]
Figure 3: Age group distribution of the human subjects. | 2308.03656#88 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 88 | Figure 5: The toolkit of AGENTBENCH is meticulously crafted for the seamless deployment of tasks and agents, coupled with an efficient evaluation assignment system. Agent servers (left) manifest in diverse forms, enabling us to deploy a model server and expose an accessible API through the HTTP protocol. Task servers (right) are composed of a task controller and several task workers, whose environment is within an isolated environment, ensuring freedom from conflicts and optimal task execution. Evaluation client (center) establishes an agent-task graph and employs the max-flow algorithm to optimize interactions. This optimization results in client workers seamlessly engaging with agent and task servers, facilitating the smooth execution of tasks and evaluations.
The weighted edge set $E$ is denoted as
$E = \{(A_{x_k}, T_{y_k}, s_k) \mid 1 \le k \le l\} \cup \{(S, A_k, w(A_k)) \mid 1 \le k \le n\} \cup \{(T_k, D, w(T_k)) \mid 1 \le k \le m\}.$ (2)
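For concreteness, a minimal self-contained sketch that builds this graph and computes an allocation with the Edmonds–Karp algorithm from A.3 (model/task names, worker counts, and the data layout are illustrative assumptions, not the benchmark's actual code):

```python
from collections import defaultdict, deque

def add_edge(capacity, u, v, c):
    capacity[u][v] = capacity[u].get(v, 0) + c
    capacity[v].setdefault(u, 0)  # reverse edge for the residual graph

def edmonds_karp(capacity, source, sink):
    """Shortest-augmenting-path max flow, O(|V||E|^2)."""
    flow = defaultdict(lambda: defaultdict(int))
    while True:
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:  # BFS over the residual graph
            u = queue.popleft()
            for v, cap in capacity[u].items():
                if cap - flow[u][v] > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: flow is maximum
        bottleneck, v = float("inf"), sink  # bottleneck along the path
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = sink  # augment along the path
        while parent[v] is not None:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck  # residual bookkeeping
            v = u

agent_workers = {"model-a": 2, "model-b": 1}                # w(A_k), assumed
task_workers = {"os": 2, "dbbench": 1}                      # w(T_k), assumed
groups = [("model-a", "os", 3), ("model-a", "dbbench", 2),  # (A, T, s_k)
          ("model-b", "os", 1)]

capacity = defaultdict(dict)
for a, w in agent_workers.items():
    add_edge(capacity, "S", ("A", a), w)
for t, w in task_workers.items():
    add_edge(capacity, ("T", t), "D", w)
for a, t, s in groups:
    add_edge(capacity, ("A", a), ("T", t), s)

flow = edmonds_karp(capacity, "S", "D")
for a, t, s in groups:
    print(f"allocate {flow[('A', a)][('T', t)]} of {s} samples for ({a}, {t})")
```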
We apply the max-flow algorithm from the source vertex $S$ to the destination vertex $D$. For each flow edge $(A_i, T_j, f_{(i,j)})$, we allocate $f_{(i,j)}$ samples to agent $A_i$ and task $T_j$. After allocation, the weight of each edge is reduced by the value of its flow. Upon completion of an evaluation, the weight of the edge connected to either $S$ or $D$ is increased by 1. | 2308.03688#88 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03656 | 89 | Figure 3: Age group distribution of the human subjects.
[Figure: bar chart "Scores and Count Grouped by Gender"; PANAS Positive/Negative Before/After scores and subject counts per gender group (including a "Prefer not to say" category).]
Figure 4: Gender distribution of the human subjects.
[Figure: bar chart "Scores and Count Grouped by Region"; PANAS scores and subject counts per region (United Kingdom, Africa, Oceania, North America, Europe, Asia).]
Figure 5: Region distribution of the human subjects.
[Figure: bar chart "Scores and Count Grouped by Education Level"; PANAS scores and subject counts per education level (Lower secondary school, Upper secondary school, University - Bachelors, University - Masters, University - Doctorate).]
Figure 6: Education level distribution of the human subjects.
[Figure: bar chart "Scores and Count Grouped by Employment Status"; PANAS scores and subject counts per employment status (Student, Unemployed, Employed, Retired).] | 2308.03656#89 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 89 | We also re-apply the algorithm to the network at a periodic interval so that newly available evaluation triples are incorporated.
# B OPERATING SYSTEM
B.1 DATASET DETAILS
Construction Details. Each evaluation sample in the OS dataset encompasses the following contents (a sketch of one serialized sample follows the list):
• Instruction. The description of the problem in natural language that the LLM needs to solve.
• Docker Environment. The docker image on which the interaction takes place (e.g., local-os/default).
• Initialization Script (Optional). The bash scripts that need to be executed independently (docker exec) before the interaction starts (e.g., user configurations, files, system statuses).
• Start Script (Optional). The bash scripts executed after the shell is created and before the interaction.
• Checking Pipeline. The checking method used to judge the correctness of the LLM's answer or operation.
• Example Script (Optional). The bash scripts that serve as reference solutions; that is, executing them during the interaction yields correct results. Used only for the unit tests introduced below.
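For illustration, a hypothetical in-memory serialization of a single sample (field names mirror the list above; the concrete on-disk format and the checker identifier are assumptions):

```python
# Hypothetical OS evaluation sample (QA type).
sample = {
    "instruction": "Count the files under /etc whose names end with .conf.",
    "docker_environment": "local-os/default",                  # docker image
    "initialization_script": "touch /etc/a.conf /etc/b.conf",  # run via docker exec
    "start_script": "cd /",                                    # run in the created shell
    "checking_pipeline": ["match-integer-answer"],             # judges the committed answer
    "example_script": "ls /etc | grep -c '\\.conf$'",          # reference solution
}
```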
We design two types of tasks in the OS evaluation beyond conventional QA-only evaluation.
• Question Answering (QA): LLMs need to output commands to solve specific questions in the OS (e.g., aggregating numbers, viewing file contents). In this case, they must commit a final answer.
• Operation: LLMs need to output commands to perform verifiable operations on the operating system (e.g., changing file/user states). In this case, they do not need to commit a final answer. | 2308.03688#89 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03427 | 90 | You are a strategy model. Given a problem and a set of tools, you need to generate a sequence of tools to determine the solution to the problem.
Each tool in the toolset is defined as follows:
SQL Generator: Given an input problem and a database, it creates a syntactically correct SQLite query statement.
Python Generator: Given an input problem and some information, it generates a syntactically correct Python code snippet.
Please use the following format:
Question: This is the original question
Error: This is the previously generated error output
Tasks: This is a list in Python. Each item in the list is a dictionary. The key of the dictionary represents the selected Tool, and the value is the Query when calling the tool. Please note to generate a Tool and Query different from the Error.
Answer: The final answer
Here are some examples of mapping problems to tools:
Question: What is the square of the number of albums by Jolin Tsai?
Error:
Tasks: [{{"SQL Generator": "What is the number of albums by Jolin Tsai?"}}, {{"Python Generator": "What is the square of the number of albums by Jolin Tsai?"}}]
Answer: The square of the number of albums by Jolin Tsai is 100
Question: | 2308.03427#90 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03688 | 90 | Thanks to the checking pipeline, two types of tasks can be evaluated in a unified solution.
Collecting challenging queries regarding OS can be difficult. In practice, about half of our instructions are created or collected from humans, while the other half are mostly QA problems generated by gpt-4 and strictly filtered by unit tests (i.e., they must yield correct answers/states).
For human instructions, we first gather 6,000 real problems and solutions tagged bash or shell from Stack Overflow (https://stackoverflow.com/). Then we sort them by score (count of likes). We invite 8 annotators with programming backgrounds to select challenging ones. For each selected problem, they create one or more task instructions and write a detailed problem description, the initialization script, the starting script, and the checking pipeline. Finally, we conduct cross verification on each evaluation sample to make sure it is correct. Each problem takes about 2 hours to annotate.
For generated problems, our unit test contains the following parts. 1) Initialization Script Correctness: we execute the initialization script and remove samples whose initialization fails (i.e., exits with a nonzero code). 2) Example Code Correctness: we execute the example code and the checking pipeline to judge the correctness of the answer, and remove samples with wrong answers. | 2308.03688#90 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03427 | 91 | "What is the square of the number of albums by Jolin Tsai?"}}]
Answer: The square of the number of albums by Jolin Tsai is 100
Question: First, calculate the square of 40, denoted as A, and then find the names of all the singers whose total number of fans is less than A.
Error:
Tasks: [{{"Python Generator": "A is the square of 40, what is the value of A?"}}, {{"SQL Generator": "What are the names of all the singers whose total number of fans is less than A?"}}]
Answer: Jolin Tsai
You must note that: The generated Tasks must strictly meet the format requirements: it must be a list in Python, each item in the list is a dictionary, the key of the dictionary represents the selected Tool, and the value is the Query when calling the tool.
Let's get started:
Question: {question}
Error: {error}
Tasks: """ | 2308.03427#91 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03688 | 91 | In the end, we curate 144 high-quality, diverse OS evaluation samples accompanied by testing interactive environments and corresponding checking pipelines (i.e., scripts). Agents are prompted with 1-shot CoT to better format their responses (cf. Appendix B).
Evaluation Setup. For each problem (i.e., instruction), the execution can be divided into 3 parts.
⢠Initialization. We create a docker container with a specific image, and we run an initialization bash script to set up environments specified by the instruction.
⢠Interaction. We start a new shell in this docker, and run the starting bash script specified by the instruction. Then the LLM to test is fed with a piece of instruction and the problem description. It starts interaction with the shell. In each turn, two actions are provides. One is to run bash script, which allows the model to generate and run a series of commands in the shell. The other is to commit answer, which allows the model to terminate the interaction process. Itâs notable that the model will be judged that it fail to solve the problem if exceeding round limit (8 by default). | 2308.03688#91 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03427 | 92 | Figure 11: The prompt added to Figure 10 for tool-subtask pair planning with other unrelated tools.
Each tool in the toolset is defined as follows:
SQL Generator: Given an input problem and a database, it creates a syntactically correct SQLite query statement.
Python Generator: Given an input problem and some information, it generates a syntactically correct Python code snippet.
Weather Query Tool: Given a location, it outputs the real-time weather of that location.
Image Generator: Given a text description, it generates a related image.
Text Extractor: Given a link to an image, it extracts the corresponding text and its position coordinates.
Translator: Given a piece of text, it translates it into other languages.
Bing Searcher: Given a piece of text, it conducts a search in the Bing browser and returns the content.
Shell Generator: Given an input problem and some information, it generates a syntactically correct Shell script.
Java Generator: Given an input problem and some information, it generates a syntactically correct Java code snippet.
Wikipedia Searcher: Given a piece of text, it conducts a search in Wikipedia and returns the content.
Office Suite: Given a text description, it automatically generates the corresponding long document, table, or PPT.
Movie Player: Given a movie name, it automatically plays the corresponding movie resource.
| 2308.03427#92 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03688 | 92 | There is a checking pipeline containing a list of scripts $f_1, f_2, \cdots, f_n$, where $f_k$ denotes the $k$-th script piece in the pipeline. For $f_k$, the answer of the model $o_0$ and the outputs $o_t$ of the preceding scripts $f_t$ $(t < k)$ are fed as input arguments into $f_k$, i.e., $o_k = f_k(o_0, o_1, \cdots, o_{k-1})$. The result is correct if and only if all the scripts exit with code 0.
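A minimal sketch of executing such a pipeline (passing $o_0, \cdots, o_{k-1}$ as bash positional arguments is an assumption about the concrete mechanics):

```python
# Hypothetical checking-pipeline runner: each script f_k receives the model
# answer o_0 and all earlier outputs o_1..o_{k-1}; any nonzero exit fails.
import subprocess

def run_pipeline(scripts, model_answer):
    outputs = [model_answer]          # o_0
    for script in scripts:            # f_1 .. f_n
        proc = subprocess.run(["bash", script, *outputs],
                              capture_output=True, text=True)
        if proc.returncode != 0:
            return False              # some f_k rejected the answer/state
        outputs.append(proc.stdout.strip())  # o_k feeds later scripts
    return True
```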
Metrics. We measure the Success Rate of LLMs solving problems in the execution. There are only two final statuses for each problem: wrong or correct.
B.2 ACTIONS
In OS evaluation, we design two major types of actions: bash and commit.
• Bash: launches a bash command (using textual inputs in the content field).
• Commit: announces the completion of the goal. If the task is a QA problem, the agent should submit the final answer in the content field; otherwise, the checking pipeline automatically checks the system status to judge the correctness.
B.3 PROMPT EXAMPLE
A prompt for OS evaluation consists of the instruction and the formulation of the interaction trajectory. An example of the instruction prompt is: | 2308.03688#92 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03688 | 93 | B.3 PROMPT EXAMPLE
A prompt for OS evaluation consists of the instruction and the formulation of the interaction trajectory. An example of the instruction prompt is:
You are an assistant that will act like a person, I will play the role of linux(ubuntu) operating system. Your goal is to implement the operations required by me or answer to the question proposed by me. For each of your turn, you should first think what you should do, and then take exact one of the three actions: "bash", "finish" or "answer".
1. If you think you should execute some bash code, take bash action, and you should print like this:
Think: put your thought here.
Act: bash
```bash
# put your bash code here
```
2. If you think you have finished the task, take finish action, and you should print like this:
Think: put your thought here.
Act: finish
3. If you think you have got the answer to the question, take answer action, and you should print like this:
Think: put your thought here.
Act: answer(Your answer to the question should be put in this pair of parentheses)
2308.03427 | 94 | You are a strategic model. Given a problem and a set of tools, you need to generate the next tool to be called and the corresponding subtask. Each tool in the toolset is defined as follows:
SQL Generator: Given an input question and a database, it creates a syntactically correct SQLite query statement.
PythonREPL: Given an input question and some information, it generates a segment of syntactically correct Python code.
Please use the following format:
Question: This is the question
History: This is the history of previously generated sub-problems; if it's empty, it means there is no historical information currently
Tool_Query: This is a dictionary in Python, where the key represents the chosen Tool, and the value is the query input when invoking the Tool.
Result: This is the output result of the current Tool_Query
...
History: This is the history of all previously generated sub-problems
Tool_Query: 'None' signifies that the Final_Answer can be derived
Result: 'None' signifies that the Final_Answer can be derived
Final_Answer: This is the final answer; when the history is sufficient to reason out the answer, provide the Final_Answer directly
In the above format, ... signifies that (History/Tool_Query/Result) can be repeated N times.
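A controller loop consistent with this format might look like the following sketch (the llm callable and the tool registry are placeholders, not TPTU's released code):

```python
import ast

def sequential_agent(question: str, llm, tools: dict) -> str:
    """Repeatedly ask for the next Tool_Query, run it, and extend the history."""
    history = ""
    while True:
        reply = llm(question=question, history=history)
        if "Final_Answer:" in reply:              # the model decided it is done
            return reply.split("Final_Answer:", 1)[1].strip()
        # Tool_Query is a one-entry Python dict literal: {"Tool name": "query"}
        query = ast.literal_eval(reply.split("Tool_Query:", 1)[1].strip())
        (tool_name, tool_input), = query.items()
        result = tools[tool_name](tool_input)     # e.g. "SQL Generator" or "PythonREPL"
        history += f"Tool_Query: {query}, Result: {result}\n"
```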
2308.03688 | 94 | If the output is too long, I will truncate it. The truncated output is not complete. You have to deal with the truncating problem by yourself. Attention, your bash code should not contain any input operation. Once again, you should take only exact one of the three actions in each turn.
The trajectory is organized in CoT style, and we use a 1-shot example to help the model better understand the action space, like the following.
<USER>: {{ INSTRUCTION }} Now, my problem is: Tell me how many files are in the directory "/etc"?
<AGENT>: Think: To count the files in /etc, I need to print all the files in it.
Act: bash
```bash
ls /etc
```
<USER>: The output of the OS:
cpi cron.hourly fuse.conf iproute2 lvm networkd-dispatcher protocols selinux tmpfiles.d [truncated because the output is too long]
<AGENT>: Think: The output has been truncated because it is too long, so I need to count files by script directly.
Act: bash
```bash
ls -1 /etc | wc -l
```
2308.03427 | 95 | When you can get the Final_Answer, you can generate an empty Tool_Query and Result, and provide the Final_Answer. Please stop after generating the Result line or the Final_Answer line.
Below are some examples:
Question: First calculate the square of 40 as A, and find the names of all singers whose total fan count is less than A.
History:
Tool_Query: {{"PythonREPL": "A is the square of 40, what is the value of A?"}}
Result: 1600
History: The Tool_Query for the first tool execution was: {{"PythonREPL": "A is the square of 40, what is the value of A?"}}, Result: 1600
Tool_Query: {{"SQL Generator": "Find the names of all singers whose total fan count is less than A"}}
Result: Jolin Tsai
2308.03688 | 95 | <USER>: The output of the OS: 220
<AGENT>: Think: Now I get the answer, it is 220.
Act: answer(220)
<USER>: Now, I will start a new problem in a new OS. My problem is: {{ PROBLEM }}
<LOOP>:
<AGENT>: {{ RESPONSE }}
Switch AGENT.Action:
    "bash": <USER>: The output of the OS: {{ OUTPUT }}
    "answer" or "finish": Executing Checking Script; Exit Loop
    otherwise: Exit Loop
C DATABASE
C.1 DATASET DETAILS
Construction Details. We acquire the source queries and databases via reusing and amalgamating several established datasets: WikiSQL (Zhong et al., 2017), WikiTableQuestions (Pasupat & Liang, 2015), SQA (Iyyer et al., 2017), HybridQA (Chen et al., 2020), and FeTaQA (Nan et al., 2021), ensuring the diversity of instructions and data.
2308.03427 | 96 | History: The Tool_Query for the first tool execution was: {{"PythonREPL": "A is the square of 40, what is the value of A?"}}, Result: 1600 The Tool_Query for the second tool execution was: {{"SQL Generator": "Find the names of all singers whose total fan count is less than A"}}, Result: Jolin Tsai
Tool_Query: None
Result:
Final_Answer: Jolin Tsai
Note: The generated Tool_Query must strictly comply with the format requirements, and only one Tool_Query can be generated each time. Do not perform additional problem analysis, strictly adhere to the format of the problem, and generate output similar to the examples.
Now let's get started:
Question: {question}
History: {history}
Tool_Query:
2308.03688 | 96 | To further enrich (and avoid leakage from) the dataset, we employed gpt-3.5-turbo to perform data augmentation. Provided with the header information and original rows of a table, gpt-3.5-turbo generates ten new rows. Using the name, header information, and some SQL examples, we task gpt-3.5-turbo with generating five additional SQL queries. Each acquired SQL statement is then fed sequentially into gpt-3.5-turbo with instructions to rephrase the sentences without changing their original meanings. The valid entries are filtered and sampled into the final dataset with 1599 entries, categorized into three basic types of DB operations: select, insert, or update.
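A sketch of how one of these augmentation calls could be driven (the prompt wording and the OpenAI client usage are illustrative, not the authors' exact script):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def augment_rows(table_name: str, header: list[str], rows: list[list[str]]) -> str:
    """Ask gpt-3.5-turbo for ten new rows consistent with the given table."""
    prompt = (
        f"Table {table_name} has columns {header} and rows:\n"
        + "\n".join(", ".join(map(str, r)) for r in rows)
        + "\nGenerate ten new rows in the same comma-separated format."
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```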
As a result, each sample in the dataset comprises:
• Instruction. A piece of description delineating the problem and guiding the agent's action.
• Table Info. Explanations about the table name and column names (i.e., meta information).
• Table Content. The actual contents within the table, utilized to create the database.
• Correct Answer. For selection-type samples, it is a text answer; for other entry types (i.e., insert, update), it is the hash code of the correctly modified table.
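Put together, a single selection-type sample might look like this (field names and values invented for illustration):

```python
sample = {
    "instruction": "Find the names of all singers whose fan count is below 2000.",
    "table_info": "Table Singers(Name TEXT, Fan_Count INTEGER)",
    "table_content": [["Jolin Tsai", 1600], ["Li Liang", 250000]],
    "correct_answer": ["Jolin Tsai"],  # insert/update samples store a table hash instead
}
```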
2308.03688 | 97 | Evaluation Setup. We assess each problem in the dataset through the following procedure:
• Initialization. An initial SQL script is constructed based on the table content, and a MySQL database is initialized in a docker container, which provides a forwarded port for interaction.
• Interaction. An initial prompt guides the agent to provide an executable SQL command along with its reasoning. The agent is provided with the prompt, instruction, and table information description, and it is expected to return a response in the given format. We execute the SQL and directly return the result to the agent, continuing this loop until the agent commits its final answer or encounters an error (e.g., reaching the maximum round limit or failing to parse the action).
• Checking. For selection-type problems, we compare the agent's answer with the standard text answer, disregarding the order, but expecting an exact match. If the answer is a single number, all equivalent representations are accepted (e.g., 5, "5.0", "+5" are considered identical). For insertion or updating types of problems, we calculate and compare the hash of the table after the agent's operation with the hash of the table after the correct SQL operation.
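A compact sketch of this checking step (the hash construction is illustrative; any canonical serialization of the table rows would do):

```python
import hashlib

def numbers_equal(a: str, b: str) -> bool:
    try:
        return float(a) == float(b)        # treats 5, "5.0", "+5" as identical
    except ValueError:
        return a == b

def check_selection(agent_answer: list[str], gold: list[str]) -> bool:
    if len(agent_answer) != len(gold):
        return False
    if len(gold) == 1:
        return numbers_equal(agent_answer[0], gold[0])
    return sorted(agent_answer) == sorted(gold)   # order-insensitive exact match

def table_hash(rows: list[tuple]) -> str:
    canon = "\n".join(",".join(map(str, r)) for r in sorted(rows))
    return hashlib.md5(canon.encode()).hexdigest()

def check_modification(rows_after_agent, rows_after_gold) -> bool:
    return table_hash(rows_after_agent) == table_hash(rows_after_gold)
```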
2308.03427 | 98 | You are an SQLite expert. Given an input question, first, generate a grammatically correct SQLite query to execute. Then examine the query results and provide an answer to the input question.
Unless a specific number of examples to retrieve is specified in the question, use the LIMIT clause to query for a maximum of 5 results.
Do not query all columns in the table. You must only query the columns necessary to answer the question.
Please only use the column names you can see in the table below. Be careful not to query columns that do not exist. Additionally, be aware of which column is in which table.
Please use the following format:
Question: This is the question.
SQLQuery: The SQL query to be executed.
SQLResult: The result of the SQL query execution.
Answer: The final answer.
Note to only use the tables below:
CREATE TABLE Person (
    id TEXT,
    name TEXT,
    age INTEGER,
    sex TEXT,
    school TEXT,
    phone TEXT,
    qualifications TEXT,
    ability TEXT
)
/*
3 rows from person table:
id name age sex school phone qualifications ability
2308.03688 | 98 | Metrics. We measure the Success Rate of agents in completing instructions. The overall success rate is the macro average of the success rates of the three categories.
C.2 DATA AUGMENTATION
We elaborate on the data augmentation for the three types of DB tasks based on the existing SQL datasets (Zhong et al., 2017; Pasupat & Liang, 2015; Iyyer et al., 2017; Chen et al., 2020; Nan et al., 2021), which are all QA problems and lack common operations such as insertion and updating. We first tested the validity of the raw data and then randomly sampled from each category of the filtered data to form the final dataset. We adopt gpt-3.5-turbo to enrich and rewrite the original instructions.
• Insert: Given the name, the header information, and the original rows of a table, we generate 5 SQL statements for insertion. Later we rephrase the sentences without changing their meaning (using shorter or longer expressions or changing the order).
• Update: Given the name, the header information, and the previously generated 5 SQL statements for insertion, we generate 5 SQL statements for modification based on the given statements. We rephrase the sentences following the above standard.
To ensure data quality, each augmented query statement is required to pass the unit test scripts.
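One way to implement such a unit test is to execute every augmented statement against a scratch copy of the table and reject anything that errors out (a sketch using SQLite in place of the actual MySQL setup):

```python
import sqlite3

def passes_unit_test(create_sql: str, query_sql: str) -> bool:
    """Return True iff the augmented statement executes without error."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(create_sql)   # build and fill a scratch table
        conn.execute(query_sql)          # the augmented INSERT/UPDATE/SELECT
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()
```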
2308.03427 | 99 | 01 Wang Min 32 Female Beijing University of Technology 13938493271 Undergraduate Tourism Industry-related Work
02 Li Liang 27 Male Beijing University of Technology 13812764851 Master Internet Company Operations
03 Zhang Jing 50 Female Wuhan University of Technology 13764592384 Master Editor of Publishing House
*/
CREATE TABLE School (
    id TEXT,
    name TEXT,
    info_985 TEXT,
    info_211 TEXT
)
/*
3 rows from school table:
id name info_985 info_211
01 Central South University yes yes
02 Shandong University yes yes
03 Tsinghua University yes yes
*/
Question: What is the average age of the people?
2308.03688 | 99 | The query type of tasks falls into the traditional scope of Text-to-SQL evaluation, and we only sample and categorize for evaluation. Each query statement in existing datasets is classified into one of the following types: "Counting", "Aggregation-MIN", "Aggregation-MAX", "Aggregation-AVG", "Aggregation-SUM", "Ranking", or "Comparison". Each one can only belong to one type; the remaining are categorized as "Other".
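The chunk does not spell out how this classification is done; a simple keyword heuristic along the following lines would reproduce the categories (illustrative only, not the authors' procedure):

```python
def categorize(sql: str) -> str:
    s = sql.upper()
    if "COUNT(" in s:
        return "Counting"
    for fn, label in [("MIN(", "Aggregation-MIN"), ("MAX(", "Aggregation-MAX"),
                      ("AVG(", "Aggregation-AVG"), ("SUM(", "Aggregation-SUM")]:
        if fn in s:
            return label
    if "ORDER BY" in s:
        return "Ranking"
    if any(op in s for op in (">", "<", ">=", "<=")):
        return "Comparison"
    return "Other"
```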
C.3 PROMPT EXAMPLE
We use the following format of prompts:
User: I will ask you a question, then you should help me operate a MySQL database with SQL to answer the question.
You have to explain the problem and your solution to me and write down your thoughts.
After thinking and explaining thoroughly, every round you can choose to operate or to answer.
your operation should be like this:
Action: Operation
```sql
SELECT * FROM table WHERE condition;
```
You MUST put SQL in markdown format without any other comments. Your SQL should be in one line.
2308.03688 | 100 | Every time you can only execute one SQL statement. I will only execute the statement in the first SQL code block. Every time you write a SQL, I will execute it for you and give you the output.
If you are done operating, and you want to commit your final answer, then write down:
Action: Answer
Final Answer: ["ANSWER1", "ANSWER2", ...]
DO NOT write this pattern unless you are sure about your answer. I expect an accurate and correct answer.
Your answer should be accurate. Your answer must be exactly the same as the correct answer.
If the question is about modifying the database, then after done operation, your answer field can be anything.
If your response cannot match any pattern I mentioned earlier, you will be judged as FAIL immediately.
Your input will be raw MySQL response, you have to deal with it by yourself.
D KNOWLEDGE GRAPH
D.1 DATASET DETAILS
Construction Details. In an effort to gauge the decision-making abilities of LLMs, specifically their proficiency in long-term planning, we have meticulously compiled a dataset sourced from pre-existing knowledge base question answering (KBQA) datasets on FREEBASE, including GrailQA (Gu et al., 2021), ComplexWebQuestions (Talmor & Berant, 2018), and GraphQuestions (Su et al., 2016).
2308.03688 | 101 | We envisage KBQA as a tool learning setting, thereby outfitting the LLM with an array of KG-querying tools. By leveraging the S-expressions annotated in (Gu & Su, 2022), we can accurately establish the optimal sequence of tool applications corresponding to each question. In order to sustain a high degree of difficulty in the tasks, we have opted to preserve only those questions which necessitate a minimum of five instances of tool invocation. Through this rigorous selection methodology, we have accrued a dataset consisting of 1,663 questions. Each data entry in the dataset has the following fields:
• Input Question. A natural language utterance that involves intricate KG information seeking.
• Topic Entities. A set of topic entities mentioned in the input question. We obviate the need of performing entity linking, allowing the LLM to focus on long-term planning.
• Action Sequence. The gold action sequence (i.e., tool invocations) that leads to the target answer.
• Gold Answer. The gold answer to the question, typically characterized by a set of KG entities.
Note that, in contrast to interacting with databases in AgentBench, where the particulars and content of the database are integrated into the input, describing an extensive KG to the LLM is not particularly feasible. This task is characterized by a partially observable environment, which is a critical aspect of its nature.
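For illustration, one such entry could be represented as follows (all field values are invented placeholders; the tool-call syntax is not the benchmark's literal API):

```python
kg_sample = {
    "question": "Which teams did the coach of the 1995 champions play for?",
    "topic_entities": ["m.0abc12"],                  # Freebase MIDs, entity linking given
    "action_sequence": ["tool_call_1(...)", "..."],  # gold sequence of >= 5 tool invocations
    "gold_answer": ["m.0def34", "m.0ghi56"],         # a set of KG entities
}
```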
2308.03427 | 102 | You are an SQL expert. Given an input question, you need to create a syntactically correct SQL query statement. Please only use the following datasets, which include four table names: GoldenMelodyAward, AwardNominees, Singers, and RecordCompanies. The column names and types of each table can be obtained from the create commands in the table below:
CREATE TABLE GoldenMelodyAward (
    Nominated_Count INTEGER,
    Competing_Count INTEGER,
    Awards_Count INTEGER,
    Award_Name TEXT,
    Host TEXT,
    Year TIME
)
CREATE TABLE AwardNominees (
    Singer_ID INTEGER,
    Nominated_Work TEXT,
    Award_Name TEXT,
    Award_Edition_ID INTEGER
)
CREATE TABLE Singers (
    Name TEXT,
    Song_Count INTEGER,
    Album_Count INTEGER,
    Fan_Count INTEGER,
2308.03688 | 102 | Evaluation Setup. To support our evaluation, we first host the latest version of FREEBASE using Virtuoso. Due to the complexity of SPARQL queries, we decide not to burden the LLM with crafting SPARQL queries by itself. Instead, we implement a series of APIs that interface with the Virtuoso backend, allowing the LLM to query the KG more effortlessly.
We use the first 500 tasks from the dataset for evaluation. Each task, when successfully executed, should ideally proceed through the following phases.
⢠Initialization. We prompt the LLM with the concrete task description, including the concrete description of each KG-querying tool that we provide.
Interaction. During this phase, the LLM is expected to invoke different tools to access the KG and accumulate the necessary information to respond accurately to the question. Importantly, the process is entirely autonomous, meaning the LLM determines the workflow entirely by itself. ⢠Final Answer Prediction. During its interaction with the KG, the LLM may generate a list of variables, each one representing a unique set of entities. If the LLM determines that one particular variable should signify the final answer, it will present this variable as its output and conclude the task. | 2308.03688#102 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
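A driver for these three phases might look like the following minimal sketch; run_task, call_llm, tools, and parse_action are hypothetical placeholders rather than the released AgentBench code, and the 15-turn budget mirrors the action limit stated in the prompt later in this appendix.

def run_task(question, tool_descriptions, call_llm, tools, parse_action, max_turns=15):
    # Phase 1: Initialization - prompt the LLM with the task and tool descriptions.
    history = [{"role": "user", "content": tool_descriptions + "\nQuestion: " + question}]
    for _ in range(max_turns):
        reply = call_llm(history)
        history.append({"role": "assistant", "content": reply})
        # Phase 3: Final Answer Prediction - the LLM nominates a variable (#id).
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[-1].strip()
        # Phase 2: Interaction - run the requested tool and feed back an observation.
        name, args = parse_action(reply)  # e.g. ("get_relations", ["Barack Obama"])
        observation = tools[name](*args)
        history.append({"role": "user", "content": "Observation: " + str(observation)})
    return None  # no final answer within the turn budget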
2308.03427 | 103 | Song_Count INTEGER,
    Album_Count INTEGER,
    Fan_Count INTEGER,
    Singer_ID INTEGER,
    Gender TEXT
)
CREATE TABLE RecordCompanies (
    Record_Company TEXT,
    Singer_Date TIME,
    Singer_ID INTEGER
)
You can query one or more tables at the same time. Be careful not to query non-existent table names or column names. Also, please note which column is in which table. Please use the following format when answering:
Question: This is the question
Answer: The SQL query statement to be executed
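For concreteness, a Question/Answer pair in this format can be exercised against the schema as in the sketch below; the sample rows and the question are illustrative assumptions, not the benchmark data.

import sqlite3

# Build the schema described above in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Singers (
    Name TEXT, Song_Count INTEGER, Album_Count INTEGER,
    Fan_Count INTEGER, Singer_ID INTEGER, Gender TEXT
);
CREATE TABLE RecordCompanies (
    Record_Company TEXT, Singer_Date TIME, Singer_ID INTEGER
);
INSERT INTO Singers VALUES ('A', 10, 2, 5000, 1, 'F'), ('B', 25, 5, 12000, 2, 'M');
INSERT INTO RecordCompanies VALUES ('Acme Records', '2001-01-01', 1);
""")

# Question: Which singers signed with 'Acme Records'?
# Answer (the SQL the model is expected to emit):
sql = """
SELECT S.Name
FROM Singers AS S
JOIN RecordCompanies AS R ON S.Singer_ID = R.Singer_ID
WHERE R.Record_Company = 'Acme Records'
"""
print(conn.execute(sql).fetchall())  # [('A',)]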
2308.03688 | 103 | Metrics. We use the F1 score as the primary evaluation metric in our study, calculated by comparing the model's predicted answers to the gold-standard answers. In addition to the F1 score, we also use the Exact Match metric. However, unlike previous studies that measure Exact Match based on the logical form, we assess it based on the exact match between the predicted and gold answer sets.
4 https://github.com/dki-lab/Freebase-Setup
Lastly, we also evaluate the Executability of the action sequences generated by the model. If the model's action sequence produces any set of answers when executed, it scores 1.0 for Executability. If it fails to produce an answer, it scores 0.
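Under this definition, set-based F1 and Exact Match can be computed as in the following sketch — a straightforward reading of the description above, not the benchmark's released scorer.

def set_f1(predicted, gold):
    """F1 between the predicted and gold answer sets."""
    pred, gold = set(predicted), set(gold)
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

def exact_match(predicted, gold):
    """EM on answer sets rather than logical forms."""
    return 1.0 if set(predicted) == set(gold) else 0.0

print(set_f1({"SpaceX", "NASA"}, {"SpaceX"}))  # 0.666...
print(exact_match({"SpaceX"}, {"SpaceX"}))     # 1.0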
D.2 PROMPT EXAMPLE
Task description:
User: You are an agent that answers questions based on the knowledge stored in a knowledge base. To achieve this, you can use the following tools to query the KB.
1. get_relations(variable: var) -> list of relations
A variable can be either an entity or a set of entities (i.e., the result of a previous query). This function helps to navigate all relations in the KB connected to the variable, so you can decide which relation
2308.03688 | 104 | of a previous query). This function helps to navigate all relations in the KB connected to the variable, so you can decide which relation is the most useful to find the answer to the question.
A simple use case can be "get_relations(Barack Obama)", which finds all relations/edges starting from the entity Barack Obama.
The argument of get_relations should always be an entity or a variable (e.g., #0) and not anything else.
2. get_neighbors(variable: var, relation: str) -> variable
Given a variable, this function returns all entities connected to the variable via the given relation. Note that get_neighbors() can only be used after get_relations() is used to find a set of viable relations.
A simple use case can be "get_neighbors(Barack Obama, people.person.profession)", which returns the profession of Obama in Freebase.
3. intersection(variable1: var, variable2: var) -> variable
Given two variables, this function returns the intersection of the two variables. The two variables MUST be of the same type!
4. get_attributes(variable: var) -> list of attributes
This function helps to find all numerical attributes of the variable. Please only use it if the question seeks a superlative accumulation (i.e., argmax or argmin).
2308.03427 | 105 | You are an SQL expert. Given an input question, you need to create a syntactically correct SQL query statement. Please only use the following datasets, which include four table names: GoldenMelodyAward, AwardNominees, Singers, and RecordCompanies. The column names and types of each table can be obtained from the CREATE commands in the table below:
CREATE TABLE GoldenMelodyAward (
    Nominated_Count INTEGER,
    Competing_Count INTEGER,
    Awards_Count INTEGER,
    Award_Name TEXT,
    Host TEXT,
    Year TIME
)
CREATE TABLE AwardNominees (
    Singer_ID INTEGER,
    Nominated_Work TEXT,
    Award_Name TEXT,
    Award_Edition_ID INTEGER
)
CREATE TABLE Singers (
    Name TEXT,
    Song_Count INTEGER,
    Album_Count INTEGER,
    Fan_Count INTEGER,
    Singer_ID
2308.03688 | 105 | Please only use it if the question seeks a superlative accumulation (i.e., argmax or argmin).
5. argmax(variable: var, attribute: str) -> variable
Given a variable, this function returns the entity with the maximum value of the given attribute. It can only be used after get_attributes() is used to find a set of viable attributes.
A simple use case can be "argmax(variable, age)", which returns the oldest entity belonging to the variable.
6. argmin(variable: var, attribute: str) -> variable
Given a variable, this function returns the entity with the minimum value of the given attribute. It can only be used after get_attributes() is used to find a set of viable attributes.
A simple use case can be "argmin(variable, age)", which returns the youngest entity belonging to the variable.
7. count(variable: var) -> int
Given a variable, this function returns the number of entities belonging to the variable.
After a variable is produced along the process, you need to judge whether a variable is the final answer to the question. Each variable is represented as an id starting from 0. For example, #0 is the first variable, #1 is the second variable, and so on.
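To make the semantics of these tools concrete, the first three can be mimicked over a toy in-memory triple store as in the sketch below; this stand-in is purely illustrative and is not the Virtuoso-backed implementation described above.

# Toy triple store: (subject, relation, object) tuples.
TRIPLES = [
    ("Barack Obama", "people.person.profession", "Politician"),
    ("Barack Obama", "people.person.profession", "Lawyer"),
    ("Michelle Obama", "people.person.profession", "Lawyer"),
]

def get_relations(variable):
    """All relations connected to an entity or a set of entities."""
    entities = variable if isinstance(variable, set) else {variable}
    return sorted({r for s, r, o in TRIPLES if s in entities})

def get_neighbors(variable, relation):
    """All entities reached from `variable` via `relation` (a new variable)."""
    entities = variable if isinstance(variable, set) else {variable}
    return {o for s, r, o in TRIPLES if s in entities and r == relation}

def intersection(var1, var2):
    """Intersection of two variables of the same type."""
    return var1 & var2

rels = get_relations("Barack Obama")          # ['people.person.profession']
v0 = get_neighbors("Barack Obama", rels[0])   # {'Politician', 'Lawyer'}
v1 = get_neighbors("Michelle Obama", rels[0]) # {'Lawyer'}
print(intersection(v0, v1))                   # {'Lawyer'}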
2308.03427 | 106 | INTEGER,
    Album_Count INTEGER,
    Fan_Count INTEGER,
    Singer_ID INTEGER,
    Gender TEXT
)
CREATE TABLE RecordCompanies (
    Record_Company TEXT,
    Singer_Date TIME,
    Singer_ID INTEGER
)
You can query one or more tables at the same time. Be careful not to query non-existent table names or column names. Also, please note which column is in which table.
Please note that you are not proficient in nested SQL; when encountering complex problems, you can think step by step to generate multiple non-nested SQL statements. For example:
Question: Some minor languages are used by no more than 3 countries, what are the source countries of these languages?
Thought: First generate the 1st SQL "select Official_Language from Country group by Official_Language having count(*) > 3", and assume that the result of this SQL is result1, then generate the 2nd SQL "select Name from Country where Official_Language not in result1".
Answer: select Name from
2308.03688 | 106 | Once you find the answer, respond with "Final Answer: #id", where id is the id of the variable that you think is the final answer. For example, if you think #3 is the final answer, you MUST respond with "Final Answer: #3".
You can only take ONE action at a time!! After you get the observation from its execution, you can take another action. You can take at most 15 actions to find the answer to the question.
Given the inherent complexity associated with enabling LLMs to query the KB, it has been observed that, in a zero-shot setting, LLMs struggle to generate any outputs of substantive relevance. As a result, we additionally provide a teaching example in our prompt:
Here is a demo that you can use to get started.
Question: the bipropellant rocket engine with kerosene and gas-generator cycle is designed by who?
Entities: [Gas-generator cycle, Kerosene]
Thought: I need to first find engines with gas-generator cycle. To achieve this, I will query the KB to find relations connected to the entity "Gas-generator cycle" and see if any of them can help me find the answer.
Action: get_relations(Gas-generator cycle)
2308.03427 | 107 | that the result of this SQL is result1, then generate the 2nd SQL "select Name from Country where Official_Language not in result1".
Answer: select Name from Country where Official_Language not in ( select Official_Language from Country group by Official_Language having count(*) > 3 )
Please use the following format when answering:
Question: This is the question
Thought: This is the thought process
Answer: This is the final SQL query statement
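Executed literally, this two-step decomposition amounts to running the first query and splicing its result set into the second; the sketch below does this with sqlite3 over an illustrative Country table (the rows are invented for demonstration).

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Country (Name TEXT, Official_Language TEXT);
INSERT INTO Country VALUES
  ('A', 'English'), ('B', 'English'), ('C', 'English'), ('D', 'English'),
  ('E', 'Basque');
""")

# Step 1: languages used by more than 3 countries.
step1 = "SELECT Official_Language FROM Country GROUP BY Official_Language HAVING count(*) > 3"
result1 = [row[0] for row in conn.execute(step1)]

# Step 2: substitute result1 into the second, non-nested query.
placeholders = ",".join("?" for _ in result1)
step2 = f"SELECT Name FROM Country WHERE Official_Language NOT IN ({placeholders})"
print(conn.execute(step2, result1).fetchall())  # [('E',)] - countries of the minor languages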
2308.03688 | 107 | Action: get_relations(Gas-generator cycle)
Observation: [spaceflight.rocket_engine_cycle.rocket_engines, spaceflight.satellite.orbiting, spaceflight.rocket_stage.fuel, spaceflight.satellite_manufacturer.spacecraft_manufactured, automotive.fuel.engines]
Thought: From the above relations, I might use spaceflight.rocket_engine_cycle.rocket_engines to find the engines of Gas-generator cycle.
Action: get_neighbors(Gas-generator cycle, spaceflight.rocket_engine_cycle.rocket_engines)
Observation: variable #0, which are instances of spaceflight.bipropellant_rocket_engine
Thought: I also need to find engines with kerosene. To achieve this, I will query the KB to find relations connected to the entity "Kerosene".
Action: get_relations(Kerosene)
Observation: [spaceflight.satellite_manufacturer.spacecraft_manufactured, automotive.fuel.engines, spaceflight.rocket_engine_cycle.rocket_engines]
Thought: From the above relations, I might use spaceflight.rocket_engine_cycle.rocket_engines to find engines with kerosene cycle.
2308.03688 | 108 | Thought: From the above relations, I might use spaceflight.rocket_engine_cycle.rocket_engines to find engines with kerosene cycle.
Action: get_neighbors(Kerosene, spaceflight.rocket_engine_cycle.rocket_engines)
Observation: variable #1, which are instances of spaceflight.bipropellant_rocket_engine
Thought: The engines with both gas-generator cycle and kerosene cycle should be the intersection of variable #0 and variable #1.
Action: intersection(#0, #1)
Observation: variable #2, which are instances of spaceflight.bipropellant_rocket_engine
Thought: Now I need to find who designed such engines, which will be the final answer. To achieve this, I will query the KB to find relations connected to the variable #2.
Action: get_relations(#2)
Observation: [spaceflight.rocket_engine.manufactured_by, spaceflight.rocket_engine.designed_by, spaceflight.rocket_engine.design_period, spaceflight.rocket_engine.status]
Thought: From the above relations, I might use spaceflight.rocket_engine.designed_by to find the designer of the engines.
2308.03427 | 109 | Transform a math problem into a solution function that can be executed using Python's math library. Use the output of running this code to answer the question. Please use the following format:
History: Information output from previous tool invocation
Question: A question about mathematics
Error: This is the error output previously generated
PythonSolution: A Python solution, make sure to generate a PythonSolution different from the one in Error, for example,
## Python Solution
def solution():
    Python statement
Answer: The final answer
Below are some demonstrations of mapping math problems to PythonSolution:
History: The original question was: What is 37593 * 67?
Question: What is 37593 * 67?
Error:
PythonSolution:
## Python Solution
def solution():
    import math
    return 37593 * 67
Answer: 2518731
History: The original question was: What is the 1/5th power of 37593?
Question: What is the 1/5th power of 37593?
Error:
PythonSolution:
## Python Solution
def solution():
    import math
    return 37593 ** (1/5)
Answer: 8.222831614237718
History: The original question was: What is the logarithm of 5 with base 10?
Question: What is the logarithm of 5 with base 10?
Error:
PythonSolution:
## Python Solution
def solution():
    import math
    return math.log(5,
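A harness that runs such a generated PythonSolution might look like the sketch below; this exec-based runner is an assumption for illustration, not code from the paper.

def run_python_solution(code):
    """Execute a generated 'def solution(): ...' block and return its result."""
    namespace = {}
    exec(code, namespace)            # defines solution() in the namespace
    return namespace["solution"]()

generated = (
    "def solution():\n"
    "    import math\n"
    "    return math.log(5, 10)\n"
)
print(run_python_solution(generated))  # 0.6989700043360187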
2308.03688 | 109 | Thought: From the above relations, I might use spaceflight.rocket_engine.designed_by to find the designer of the engines.
Action: get_neighbors(#2, spaceflight.rocket_engine.designed_by)
Observation: variable #3, which are instances of spaceflight.rocket_engine_designer
Thought: #3 is the final answer to the question, which represents the target rocket engine designer.
Final Answer: #3
E DIGITAL CARD GAME
E.1 DATASET DETAILS
Construction Details. We use the Aquawar framework as the basis for our interactive system. The first type of interaction is the action phase, where the model needs to select the fish it wants to act with and then choose the target for its skill. To ensure the validity of model operations, we perform checks for valid actions. The second type of interaction is the guess phase, where we provide the model with known information, including fish species and skill descriptions, and the enemy's targets. We have two naive strategies (random and greedy search) for testing purposes. The following is a detailed definition and description of the game process.
• Player and Cards. It is a two-player battle game with four pet fish (i.e., cards) in each team. The card pool consists of ten fish (Appendix E.2), and both players choose four definite fish to use before the start of the game.
2308.03688 | 110 | • Initial State. Each fish has 400 initial health, 200 initial attack power, an active ability, and a passive ability.
• Basic Rule. Players choose a live fish to use its active skill or normal attack on an enemy fish each round. All alive fish's passive abilities will automatically trigger when meeting certain conditions.
• Assertion Mechanism. The identity of a player's fish is initially hidden. The counter-player can guess one of the player's fish's identities each round. If the counter-player guesses correctly, the player's fish's identity is revealed, and all its fish will get damaged.
• Round Process. Within a round of the game, the player for that round will first assert the identity of one opponent's fish that is alive and whose identity has not been revealed. If the assertion is correct, all of the opponent's fish that remain alive get damaged. Subsequently, the player for that round can command one alive fish to execute a normal attack or an active ability. Following this, any fish that meets the condition will unleash its passive ability.
• Victory Condition. The victory condition is to have more fish alive at the end of the game.
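These rules suggest a small per-card state record; the sketch below is only an illustration (HP and ATK values come from the Initial State bullet, and all species names except Spray are placeholders).

from dataclasses import dataclass

@dataclass
class Fish:
    """Per-card state under the rules above: 400 HP, 200 ATK, hidden identity."""
    species: str              # one of the ten kinds in the card pool
    hp: int = 400
    atk: int = 200
    revealed: bool = False    # set once a correct assertion exposes the identity

    @property
    def alive(self) -> bool:
        return self.hp > 0

# 'Spray' is one of the ten fish (Appendix E.2); the other names are placeholders.
team = [Fish("Spray"), Fish("FishB"), Fish("FishC"), Fish("FishD")]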
2308.03688 | 111 | Victory Condition. The victory condition is to have more fish alive at the end of the game.
To balance agent engagement and game complexity simultaneously, we designed two stages of game logic. We remove the assertions in the first stage while keeping assertions in the second stage. We test all the models on both the first and second stages separately and choose the average performance for the final score.
We choose two naive playing strategies as the baselines.
• The first strategy is simply a random action from all available action spaces.
• The second strategy will try to use an AOE attack if possible, continuously evaluating whether a one-hit kill is possible. Then, it attempts to use active skills and, finally, resorts to normal attacks. Overall, this strategy follows a certain pattern but may not necessarily be the most optimal one.
Evaluation Setup. For each game played, we evaluate with the following steps:
• Initialization. We initiate the modified game logic environment, which uses pybind to compile, and the baseline game agent under the Ubuntu 20.04 environment.
2308.03427 | 112 | You are a strategy model and given a problem and a set of tools, you need to generate a sequence of executable tools to determine the solution to the problem. Each tool in the toolset is defined as follows:
SQL Generator: Given an input problem and a database, create a syntactically correct SQLite query statement.
PythonREPL: Given an input problem and some information, generate a syntactically correct Python code.
Please use the following format:
Question: Here is the question
Error: Here is the previously generated error output
Tasks: Here is a Python List type, where each item in the List is a dictionary. The key of the dictionary represents the selected tool, and the value is the query input when calling the tool. Please note that the generated Tool and Query should be different from those in the Error.
Answer: The final answer
Here are some examples mapping the question to the tools:
Question: What is the square of the number of albums by Jolin Tsai?
Error:
Tasks: [{{SQL Generator: "What is the number of albums by Jolin Tsai?"}}, {{PythonREPL: "What is the square of the number of albums by Jolin Tsai?"}}]
Answer: The square of the number of albums by Jolin Tsai
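A sequential agent consuming such a Tasks list can be sketched as a simple dispatch loop; sql_generator and python_repl below are hypothetical stand-ins for the actual tools, not the paper's implementation.

# Hypothetical tool implementations, shown only as placeholders.
def sql_generator(query: str, history: str) -> str: ...
def python_repl(query: str, history: str) -> str: ...

TOOLS = {"SQL Generator": sql_generator, "PythonREPL": python_repl}

def run_sequentially(tasks):
    """Execute [{tool: query}, ...] in order, threading each result as history."""
    history = ""
    for task in tasks:
        (tool_name, query), = task.items()
        history = TOOLS[tool_name](query, history)
    return history

tasks = [
    {"SQL Generator": "What is the number of albums by Jolin Tsai?"},
    {"PythonREPL": "What is the square of the number of albums by Jolin Tsai?"},
]
# answer = run_sequentially(tasks)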
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
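The Tasks format specified by the TPTU prompt in the record above — a Python list in which each item is a single-key dictionary mapping a tool name to its query — can be checked mechanically before any tool is invoked. Below is a minimal sketch of such a validator; the use of ast.literal_eval (which requires the model to quote the dictionary keys) and the name parse_tasks are our assumptions, not code from the paper.

import ast

ALLOWED_TOOLS = {"SQL Generator", "PythonREPL"}  # the toolset defined in the prompt

def parse_tasks(raw: str) -> list[dict]:
    """Validate the model's 'Tasks:' output against the format the prompt demands.

    Raises ValueError with a message that can be fed back through the
    prompt's 'Error:' slot on the next attempt.
    """
    try:
        tasks = ast.literal_eval(raw.strip())
    except (ValueError, SyntaxError) as exc:
        raise ValueError(f"Tasks is not a valid Python literal: {exc}")
    if not isinstance(tasks, list):
        raise ValueError("Tasks must be a Python List")
    for item in tasks:
        if not (isinstance(item, dict) and len(item) == 1):
            raise ValueError("each item must be a dictionary with exactly one key")
        (tool, query), = item.items()
        if tool not in ALLOWED_TOOLS:
            raise ValueError(f"unknown tool: {tool!r}")
        if not isinstance(query, str) or not query:
            raise ValueError("each query must be a non-empty string")
    return tasks

# Example using the prompt's first demonstration (with quoted keys):
print(parse_tasks('[{"SQL Generator": "What is the number of albums by Jolin Tsai?"}, '
                  '{"PythonREPL": "What is the square of the number of albums by Jolin Tsai?"}]'))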
2308.03688 | 112 | • Initialization. We initialize the modified game logic environment (compiled with pybind) and the baseline game agent under Ubuntu 20.04.
• Interaction. We place rule descriptions in the instruction prompt according to the game stage, and the LLM agent interacts and competes strategically with the baseline within the game logic environment. We give the LLM agent five chances to respond in the correct format; it is immediately deemed defeated if it fails to output a legal action within that number of attempts. We also encourage the model to output its reasoning process as CoT. • Result Calculation. During the interaction, we record the entire game process for
battle playback and calculate the game results to obtain the metrics for the task.
• Metrics. Our evaluation covers basic gameplay elements: winning rounds (Win Round), total played rounds (Total Round), winning rate (Win Rate), and total damage inflicted relative to total health (Damage Rate). From these we compute a final reward score:
reward = 0.7 × metric_win_rate + 0.3 × metric_damage_rate
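As a concrete reading of this formula, the snippet below computes the reward from a finished match; the function signature and the capping of the damage rate at 1.0 are our assumptions, not AgentBench's actual implementation.

def compute_reward(win_rounds: int, total_rounds: int,
                   damage_dealt: float, enemy_total_health: float) -> float:
    """reward = 0.7 * win_rate + 0.3 * damage_rate, per the formula above."""
    win_rate = win_rounds / total_rounds if total_rounds else 0.0
    # Damage rate: total damage inflicted relative to total health,
    # capped at 1.0 (our assumption) so overkill cannot inflate the score.
    damage_rate = min(damage_dealt / enemy_total_health, 1.0) if enemy_total_health else 0.0
    return 0.7 * win_rate + 0.3 * damage_rate

# Example: 2 wins in 3 rounds, 1100 damage dealt against 1600 total enemy health
print(round(compute_reward(2, 3, 1100, 1600), 3))  # 0.673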
E.2 THE ATTRIBUTES OF FISH
The game has ten kinds of fish according to the game rules.
• Spray | 2308.03688#112 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
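The Interaction step in the 2308.03688 chunk above gives the agent five attempts to produce a legal action before it forfeits. A minimal sketch of that loop follows; the agent interface (an act method) and the convention that the action is the last line after the CoT reasoning are our assumptions.

MAX_FORMAT_RETRIES = 5  # "five chances to respond in the correct format"

def get_legal_action(agent, observation: str, legal_actions: set[str]) -> str | None:
    """Query the agent until it emits a legal action; None signals a forfeit."""
    for _ in range(MAX_FORMAT_RETRIES):
        lines = agent.act(observation).strip().splitlines()  # assumed agent interface
        action = lines[-1] if lines else ""                  # assume the action follows the CoT
        if action in legal_actions:
            return action
    return None  # the caller treats exhausted retries as an immediate defeat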
2308.03427 | 113 | {{PythonREPL: "What is the square of the number of albums by Jolin Tsai?"}}] Answer: The square of the number of albums by Jolin Tsai is 100 Question: First, calculate the square of 40 and denote it as A. Then, find the names of all artists with a total number of fans less than A. Error: Tasks: [{{PythonREPL: "Let A be the square of 40. What is the value of A?"}}, {{SQL Generator: "Find the names of all artists with a total number of fans less than A"}}] Answer: Jolin Tsai Note that you must ensure that the generated Tasks strictly adhere to the format requirements: they must be in Python List type, where each item is a dictionary. The key of the dictionary represents the selected tool, and the value is the query input when calling the tool. Now, let's proceed: Question: {question} Error: {error} Tasks: | 2308.03427#113 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
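Continuing the sketch given after the previous TPTU chunk, a sequential agent could dispatch the validated tasks one tool at a time, threading each intermediate result into the next call and refilling the prompt's {error} slot when parsing fails. Every name here (run_tasks, plan_and_solve, the llm and tools callables) is illustrative, not the paper's code.

def run_tasks(tasks: list[dict], tools: dict) -> str | None:
    """Execute the task list in order, passing earlier results forward."""
    context: list[str] = []
    for item in tasks:
        (tool_name, query), = item.items()
        result = tools[tool_name](query, context)  # each tool sees prior results
        context.append(f"{query} -> {result}")
    return context[-1] if context else None

def plan_and_solve(llm, tools: dict, question: str, max_retries: int = 3):
    """Re-prompt with the previous failure in the 'Error:' slot, mirroring
    the prompt's rule that new Tasks must differ from those in Error."""
    error = ""
    for _ in range(max_retries):
        raw = llm(question=question, error=error)
        try:
            return run_tasks(parse_tasks(raw), tools)  # parse_tasks: earlier sketch
        except (ValueError, KeyError) as exc:
            error = f"{raw} ({exc})"  # fed back verbatim on the next attempt
    raise RuntimeError("no valid plan produced within the retry budget")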
2308.03688 | 113 | E.2 THE ATTRIBUTES OF FISH
The game has ten kinds of fish according to the game rules.
• Spray
Counter (Passive): Inflicts 30 damage on the attacker when a teammate's health is below 30%. - AOE (Active): Attacks all enemies for 35% of its attack points.
• Flame
Counter (Passive): Inflicts 30 damage on the attacker when a teammate's health is below 30%. - Infight (Active): Inflicts 75 damage on one living teammate and increases your attack points by 140.
• Eel
Deflect (Passive): Distributes 70% of incoming damage to teammates and takes 30% when attacked. Gains 40 attack points after accumulating 200 damage taken. - AOE (Active): Attacks all enemies for 35% of its attack points.
• Sunfish
Deflect (Passive): Distributes 70% of incoming damage to teammates and takes 30% when attacked. Gains 40 attack points after accumulating 200 damage taken. - Infight (Active): Inflicts 75 damage on one living teammate and increases your attack points by 140.
• Barracuda
Reduce (Passive): Has a 30% chance to avoid any incoming damage. - Crit (Active): Deals 120 CRITICAL damage to an enemy.
• Mobula | 2308.03688#113 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
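The passive and active abilities enumerated in the chunk above translate directly into a small data model, which is how an evaluator (or an agent's scratch simulation) might reason about them. The numbers come from the rules text; the 400-point starting health, the class layout, and the trigger timing of Counter are our assumptions.

from dataclasses import dataclass

MAX_HP = 400.0  # assumed starting health; the rules text does not state it

@dataclass
class Fish:
    name: str
    hp: float = MAX_HP
    atk: float = 100.0   # assumed base attack points
    passive: str = "counter"

def aoe(attacker: Fish, enemies: list[Fish]) -> None:
    """AOE (Active): attack all enemies for 35% of attack points (Spray, Eel)."""
    for enemy in enemies:
        enemy.hp -= 0.35 * attacker.atk

def infight(actor: Fish, teammate: Fish) -> None:
    """Infight (Active): 75 damage to one living teammate, +140 attack (Flame, Sunfish)."""
    teammate.hp -= 75
    actor.atk += 140

def counter(defenders: list[Fish], attacker: Fish) -> None:
    """Counter (Passive): each counter-fish hits the attacker for 30 when a
    teammate's health is below 30% (trigger timing is our assumption)."""
    if any(0 < f.hp < 0.3 * MAX_HP for f in defenders):
        for f in defenders:
            if f.passive == "counter" and f.hp > 0:
                attacker.hp -= 30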