Columns, with value types and observed length/value ranges: doi (string, length 10); chunk-id (int64, 0 to 936); chunk (string, 401 to 2.02k characters); id (string, 12 to 14); title (string, 8 to 162); summary (string, 228 to 1.92k); source (string, length 31); authors (string, 7 to 6.97k); categories (string, 5 to 107); comment (string, 4 to 398, nullable); journal_ref (string, 8 to 194, nullable); primary_category (string, 5 to 17); published (string, length 8); updated (string, length 8); references (list).
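Each row pairs one chunk of a paper's reference section (the chunk column, keyed by doi and chunk-id) with that paper's metadata, which is repeated on every row. As a minimal sketch of working with rows in this shape (assuming a local Parquet copy named arxiv_chunks.parquet; the filename and the regrouping helper are illustrative, not given by this table), the chunks can be regrouped per paper like this:

```python
import pandas as pd

# Hypothetical local copy of the table below; the filename is an assumption.
df = pd.read_parquet("arxiv_chunks.parquet")

# Paper-level metadata (title, summary, authors, ...) repeats on every row,
# so keep it once per DOI and concatenate the reference-section chunks in order.
papers = (
    df.sort_values(["doi", "chunk-id"])
      .groupby("doi")
      .agg(
          title=("title", "first"),
          summary=("summary", "first"),
          n_chunks=("chunk-id", "count"),
          references_text=("chunk", lambda parts: "\n".join(parts)),
      )
)

print(papers[["title", "n_chunks"]])
```

Depending on the export format, the references column may arrive as a list of {"id": ...} records per row or as a JSON string that needs json.loads before use.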
doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2308.04026 | 25 | Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2023. Codegen: An open large language model for code with multi-turn program synthesis.
OpenAI. 2023. Gpt-4 technical report.
Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. 2023. Generative agents: Interactive simulacra of human behavior.
Joon Sung Park, Lindsay Popowski, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. 2022. Social simulacra: Creating populated prototypes for social computing systems.
David Premack and Guy Woodruff. 1978. Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1(4):515–526.
Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, Juyuan Xu, Zhiyuan Liu, and Maosong Sun. 2023. Communicative agents for software development. | 2308.04026#25 | AgentSims: An Open-Source Sandbox for Large Language Model Evaluation | With ChatGPT-like large language models (LLM) prevailing in the community,
how to evaluate the ability of LLMs is an open question. Existing evaluation
methods suffer from following shortcomings: (1) constrained evaluation
abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that
task-based evaluation, where LLM agents complete tasks in a simulated
environment, is a one-for-all solution to solve above problems. We present
AgentSims, an easy-to-use infrastructure for researchers from all disciplines
to test the specific capacities they are interested in. Researchers can build
their evaluation tasks by adding agents and buildings on an interactive GUI or
deploy and test new support mechanisms, i.e. memory, planning and tool-use
systems, by a few lines of codes. Our demo is available at
https://agentsims.com . | http://arxiv.org/pdf/2308.04026 | Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen | cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary)
14F05, 14J26 (Secondary) 68T42 | submit to EMNLP2023 demo track | null | cs.AI | 20230808 | 20230808 | [
{
"id": "2009.03300"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2303.17580"
}
] |
2308.04030 | 25 | # References
Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Merouane Debbah, Etienne Goffinet, Daniel Heslow, Julien Launay, Quentin Malartic, Badreddine Noune, Baptiste Pannier, and Guilherme Penedo. 2023. Falcon-40B: an open large language model with state-of-the-art performance.
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732.
BIG bench authors. 2023. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research.
Ángel Alexander Cabrera, Erica Fu, Donald Bertucci, Kenneth Holstein, Ameet Talwalkar, Jason I. Hong, and Adam Perer. 2023. Zeno: An interactive framework for behavioral evaluation of machine learning. In CHI Conference on Human Factors in Computing
Systems, CHI '23, New York, NY, USA. Association for Computing Machinery. | 2308.04030#25 | Gentopia: A Collaborative Platform for Tool-Augmented LLMs | Augmented Language Models (ALMs) empower large language models with the
ability to use tools, transforming them into intelligent agents for real-world
interactions. However, most existing frameworks for ALMs, to varying degrees,
are deficient in the following critical features: flexible customization,
collaborative democratization, and holistic evaluation. We present gentopia, an
ALM framework enabling flexible customization of agents through simple
configurations, seamlessly integrating various language models, task formats,
prompting modules, and plugins into a unified paradigm. Furthermore, we
establish gentpool, a public platform enabling the registration and sharing of
user-customized agents. Agents registered in gentpool are composable such that
they can be assembled together for agent collaboration, advancing the
democratization of artificial intelligence. To ensure high-quality agents,
gentbench, an integral component of gentpool, is designed to thoroughly
evaluate user-customized agents across diverse aspects such as safety,
robustness, efficiency, etc. We release gentopia on Github and will
continuously move forward. | http://arxiv.org/pdf/2308.04030 | Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu | cs.AI | null | null | cs.AI | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2305.20050"
},
{
"id": "2305.07759"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2305.09645"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2301.12726"
},
{
"id": "2302.06706"
},
{
"id": "2303.17580"
},
{
"id": "2110.08193"
},
{
"id": "2109.01652"
},
{
"id": "2306.08302"
}
] |
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc.
Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. 2023. Quantifying memorization across neural language models. In The Eleventh International Conference on Learning Representations.
Harrison Chase. 2022. LangChain. | 2308.03983#26 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2305.14314"
},
{
"id": "2307.09288"
},
{
"id": "2212.03533"
},
{
"id": "1906.02569"
},
{
"id": "2304.12244"
}
] |
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580.
Hao Sun, Zhexin Zhang, Jiawen Deng, Jiale Cheng, and Minlie Huang. 2023. Safety assessment of chinese large language models.
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. 2023a. Voyager: An open-ended embodied agent with large language models.
Jiaan Wang, Yunlong Liang, Fandong Meng, Zengkui Sun, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023b. Is chatgpt a good nlg evaluator? a preliminary study.
Lilian Weng. 2023. Llm-powered autonomous agents. lilianweng.github.io. | 2308.04026#26 | AgentSims: An Open-Source Sandbox for Large Language Model Evaluation | With ChatGPT-like large language models (LLM) prevailing in the community,
how to evaluate the ability of LLMs is an open question. Existing evaluation
methods suffer from following shortcomings: (1) constrained evaluation
abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that
task-based evaluation, where LLM agents complete tasks in a simulated
environment, is a one-for-all solution to solve above problems. We present
AgentSims, an easy-to-use infrastructure for researchers from all disciplines
to test the specific capacities they are interested in. Researchers can build
their evaluation tasks by adding agents and buildings on an interactive GUI or
deploy and test new support mechanisms, i.e. memory, planning and tool-use
systems, by a few lines of codes. Our demo is available at
https://agentsims.com . | http://arxiv.org/pdf/2308.04026 | Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen | cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary)
14F05, 14J26 (Secondary) 68T42 | submit to EMNLP2023 demo track | null | cs.AI | 20230808 | 20230808 | [
{
"id": "2009.03300"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2303.17580"
}
] |
Systems, CHI '23, New York, NY, USA. Association for Computing Machinery.
Shawn Callegari. 2023. Semantic Kernel: Integrate cutting-edge LLM technology quickly and easily into your apps.
Harrison Chase. 2023. LangChain: Next Generation Language Processing.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open- source chatbot impressing gpt-4 with 90%* chatgpt quality. | 2308.04030#26 | Gentopia: A Collaborative Platform for Tool-Augmented LLMs | Augmented Language Models (ALMs) empower large language models with the
ability to use tools, transforming them into intelligent agents for real-world
interactions. However, most existing frameworks for ALMs, to varying degrees,
are deficient in the following critical features: flexible customization,
collaborative democratization, and holistic evaluation. We present gentopia, an
ALM framework enabling flexible customization of agents through simple
configurations, seamlessly integrating various language models, task formats,
prompting modules, and plugins into a unified paradigm. Furthermore, we
establish gentpool, a public platform enabling the registration and sharing of
user-customized agents. Agents registered in gentpool are composable such that
they can be assembled together for agent collaboration, advancing the
democratization of artificial intelligence. To ensure high-quality agents,
gentbench, an integral component of gentpool, is designed to thoroughly
evaluate user-customized agents across diverse aspects such as safety,
robustness, efficiency, etc. We release gentopia on Github and will
continuously move forward. | http://arxiv.org/pdf/2308.04030 | Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu | cs.AI | null | null | cs.AI | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2305.20050"
},
{
"id": "2305.07759"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2305.09645"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2301.12726"
},
{
"id": "2302.06706"
},
{
"id": "2303.17580"
},
{
"id": "2110.08193"
},
{
"id": "2109.01652"
},
{
"id": "2306.08302"
}
] |
2308.04026 | 27 | Lilian Weng. 2023. Llm-powered autonomous agents. lilianweng.github.io.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models.
Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2023. Agieval: A human-centric benchmark for evaluating foundation models.
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. 2023. Lima: Less is more for alignment. | 2308.04026#27 | AgentSims: An Open-Source Sandbox for Large Language Model Evaluation | With ChatGPT-like large language models (LLM) prevailing in the community,
how to evaluate the ability of LLMs is an open question. Existing evaluation
methods suffer from following shortcomings: (1) constrained evaluation
abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that
task-based evaluation, where LLM agents complete tasks in a simulated
environment, is a one-for-all solution to solve above problems. We present
AgentSims, an easy-to-use infrastructure for researchers from all disciplines
to test the specific capacities they are interested in. Researchers can build
their evaluation tasks by adding agents and buildings on an interactive GUI or
deploy and test new support mechanisms, i.e. memory, planning and tool-use
systems, by a few lines of codes. Our demo is available at
https://agentsims.com . | http://arxiv.org/pdf/2308.04026 | Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen | cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary)
14F05, 14J26 (Secondary) 68T42 | submit to EMNLP2023 demo track | null | cs.AI | 20230808 | 20230808 | [
{
"id": "2009.03300"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2303.17580"
}
] |
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
Jwala Dhamala, Tony Sun, Varun Kumar, Satyapriya Krishna, Yada Pruksachatkun, Kai-Wei Chang, and Rahul Gupta. 2021. Bold: Dataset and metrics for measuring biases in open-ended language generation. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pages 862–872.
Ronen Eldan and Yuanzhi Li. 2023. Tinystories: How small can language models be and still speak coherent english? arXiv preprint arXiv:2305.07759.
Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and Tushar Khot. 2023. Specializing smaller language models towards multi-step reasoning. arXiv preprint arXiv:2301.12726. | 2308.04030#27 | Gentopia: A Collaborative Platform for Tool-Augmented LLMs | Augmented Language Models (ALMs) empower large language models with the
ability to use tools, transforming them into intelligent agents for real-world
interactions. However, most existing frameworks for ALMs, to varying degrees,
are deficient in the following critical features: flexible customization,
collaborative democratization, and holistic evaluation. We present gentopia, an
ALM framework enabling flexible customization of agents through simple
configurations, seamlessly integrating various language models, task formats,
prompting modules, and plugins into a unified paradigm. Furthermore, we
establish gentpool, a public platform enabling the registration and sharing of
user-customized agents. Agents registered in gentpool are composable such that
they can be assembled together for agent collaboration, advancing the
democratization of artificial intelligence. To ensure high-quality agents,
gentbench, an integral component of gentpool, is designed to thoroughly
evaluate user-customized agents across diverse aspects such as safety,
robustness, efficiency, etc. We release gentopia on Github and will
continuously move forward. | http://arxiv.org/pdf/2308.04030 | Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu | cs.AI | null | null | cs.AI | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2305.20050"
},
{
"id": "2305.07759"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2305.09645"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2301.12726"
},
{
"id": "2302.06706"
},
{
"id": "2303.17580"
},
{
"id": "2110.08193"
},
{
"id": "2109.01652"
},
{
"id": "2306.08302"
}
] |
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Mor- | 2308.03983#28 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2305.14314"
},
{
"id": "2307.09288"
},
{
"id": "2212.03533"
},
{
"id": "1906.02569"
},
{
"id": "2304.12244"
}
] |
Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, and Aleksandra Faust. 2023. A real-world webagent with planning, long context understanding, and program synthesis. arXiv preprint arXiv:2307.12856.
Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et al. 2021a. Measuring coding challenge competence with apps. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. In International Conference on Learning Representations.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021b. Measuring mathematical problem solving with the math dataset. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2). | 2308.04030#28 | Gentopia: A Collaborative Platform for Tool-Augmented LLMs | Augmented Language Models (ALMs) empower large language models with the
ability to use tools, transforming them into intelligent agents for real-world
interactions. However, most existing frameworks for ALMs, to varying degrees,
are deficient in the following critical features: flexible customization,
collaborative democratization, and holistic evaluation. We present gentopia, an
ALM framework enabling flexible customization of agents through simple
configurations, seamlessly integrating various language models, task formats,
prompting modules, and plugins into a unified paradigm. Furthermore, we
establish gentpool, a public platform enabling the registration and sharing of
user-customized agents. Agents registered in gentpool are composable such that
they can be assembled together for agent collaboration, advancing the
democratization of artificial intelligence. To ensure high-quality agents,
gentbench, an integral component of gentpool, is designed to thoroughly
evaluate user-customized agents across diverse aspects such as safety,
robustness, efficiency, etc. We release gentopia on Github and will
continuously move forward. | http://arxiv.org/pdf/2308.04030 | Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu | cs.AI | null | null | cs.AI | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2305.20050"
},
{
"id": "2305.07759"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2305.09645"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2301.12726"
},
{
"id": "2302.06706"
},
{
"id": "2303.17580"
},
{
"id": "2110.08193"
},
{
"id": "2109.01652"
},
{
"id": "2306.08302"
}
] |
Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. | 2308.03983#29 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2305.14314"
},
{
"id": "2307.09288"
},
{
"id": "2212.03533"
},
{
"id": "1906.02569"
},
{
"id": "2304.12244"
}
] |
2308.04030 | 29 | Shima Imani, Liang Du, and Harsh Shrivastava. 2023. Mathprompter: Mathematical reasoning using large language models. arXiv preprint arXiv:2303.05398.
Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye, Wayne Xin Zhao, and Ji-Rong Wen. 2023. Structgpt: A general framework for large language model to reason over structured data. arXiv preprint arXiv:2305.09645.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199–22213.
Abhay Kondi. 2023. SuperAGI: Open-source framework to build, manage and run useful Autonomous AI Agents.
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023. Let's verify step by step. arXiv preprint arXiv:2305.20050. | 2308.04030#29 | Gentopia: A Collaborative Platform for Tool-Augmented LLMs | Augmented Language Models (ALMs) empower large language models with the
ability to use tools, transforming them into intelligent agents for real-world
interactions. However, most existing frameworks for ALMs, to varying degrees,
are deficient in the following critical features: flexible customization,
collaborative democratization, and holistic evaluation. We present gentopia, an
ALM framework enabling flexible customization of agents through simple
configurations, seamlessly integrating various language models, task formats,
prompting modules, and plugins into a unified paradigm. Furthermore, we
establish gentpool, a public platform enabling the registration and sharing of
user-customized agents. Agents registered in gentpool are composable such that
they can be assembled together for agent collaboration, advancing the
democratization of artificial intelligence. To ensure high-quality agents,
gentbench, an integral component of gentpool, is designed to thoroughly
evaluate user-customized agents across diverse aspects such as safety,
robustness, efficiency, etc. We release gentopia on Github and will
continuously move forward. | http://arxiv.org/pdf/2308.04030 | Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu | cs.AI | null | null | cs.AI | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2305.20050"
},
{
"id": "2305.07759"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2305.09645"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2301.12726"
},
{
"id": "2302.06706"
},
{
"id": "2303.17580"
},
{
"id": "2110.08193"
},
{
"id": "2109.01652"
},
{
"id": "2306.08302"
}
] |
Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2023. Qlora: Efficient finetuning of quantized llms. arXiv preprint arXiv:2305.14314.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Realm: Retrieval-augmented language model pre-training. In Proceedings of the 37th International Conference on Machine Learning, ICML'20. JMLR.org.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations. | 2308.03983#30 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2305.14314"
},
{
"id": "2307.09288"
},
{
"id": "2212.03533"
},
{
"id": "1906.02569"
},
{
"id": "2304.12244"
}
] |
2308.04030 | 30 | Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs.
Anton Osika. 2023. GPT-Engineer: Specify what you want it to build, the AI asks for clarification, and then builds it.
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, and Xindong Wu. 2023. Unifying large language models and knowledge graphs: A roadmap. arXiv preprint arXiv:2306.08302.
Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel R Bowman. 2021. Bbq: A hand-built bias benchmark for question answering. arXiv preprint arXiv:2110.08193.
Toran Bruce Richards. 2023. Auto-GPT: An Autonomous GPT-4 Experiment.
Sasha Rush. 2023. MiniChain: A tiny library for coding with large language models. | 2308.04030#30 | Gentopia: A Collaborative Platform for Tool-Augmented LLMs | Augmented Language Models (ALMs) empower large language models with the
ability to use tools, transforming them into intelligent agents for real-world
interactions. However, most existing frameworks for ALMs, to varying degrees,
are deficient in the following critical features: flexible customization,
collaborative democratization, and holistic evaluation. We present gentopia, an
ALM framework enabling flexible customization of agents through simple
configurations, seamlessly integrating various language models, task formats,
prompting modules, and plugins into a unified paradigm. Furthermore, we
establish gentpool, a public platform enabling the registration and sharing of
user-customized agents. Agents registered in gentpool are composable such that
they can be assembled together for agent collaboration, advancing the
democratization of artificial intelligence. To ensure high-quality agents,
gentbench, an integral component of gentpool, is designed to thoroughly
evaluate user-customized agents across diverse aspects such as safety,
robustness, efficiency, etc. We release gentopia on Github and will
continuously move forward. | http://arxiv.org/pdf/2308.04030 | Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu | cs.AI | null | null | cs.AI | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2305.20050"
},
{
"id": "2305.07759"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2305.09645"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2301.12726"
},
{
"id": "2302.06706"
},
{
"id": "2303.17580"
},
{
"id": "2110.08193"
},
{
"id": "2109.01652"
},
{
"id": "2306.08302"
}
] |
Ahmet Iscen, Alireza Fathi, and Cordelia Schmid. 2023. Improving image recognition by retrieving from web-scale image-text data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19295–19304.
Peter Izsak, Moshe Berchansky, Daniel Fleischer, and Ronen Laperdon. 2023. fastRAG: Efficient Retrieval Augmentation and Generation Framework.
Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3):535–547.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the
2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In Advances in Neural Information Processing Systems. | 2308.03983#31 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2305.14314"
},
{
"id": "2307.09288"
},
{
"id": "2212.03533"
},
{
"id": "1906.02569"
},
{
"id": "2304.12244"
}
] |
2308.04030 | 31 | Sasha Rush. 2023. MiniChain: A tiny library for coding with large language models.
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580.
Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny
Zhou, and Jason Wei. 2022. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. 2023. Alpaca: A strong, replicable instruction-following model. Stanford Center for Research on Foundation Models., 3(6):7. | 2308.04030#31 | Gentopia: A Collaborative Platform for Tool-Augmented LLMs | Augmented Language Models (ALMs) empower large language models with the
ability to use tools, transforming them into intelligent agents for real-world
interactions. However, most existing frameworks for ALMs, to varying degrees,
are deficient in the following critical features: flexible customization,
collaborative democratization, and holistic evaluation. We present gentopia, an
ALM framework enabling flexible customization of agents through simple
configurations, seamlessly integrating various language models, task formats,
prompting modules, and plugins into a unified paradigm. Furthermore, we
establish gentpool, a public platform enabling the registration and sharing of
user-customized agents. Agents registered in gentpool are composable such that
they can be assembled together for agent collaboration, advancing the
democratization of artificial intelligence. To ensure high-quality agents,
gentbench, an integral component of gentpool, is designed to thoroughly
evaluate user-customized agents across diverse aspects such as safety,
robustness, efficiency, etc. We release gentopia on Github and will
continuously move forward. | http://arxiv.org/pdf/2308.04030 | Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu | cs.AI | null | null | cs.AI | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2305.20050"
},
{
"id": "2305.07759"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2305.09645"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2301.12726"
},
{
"id": "2302.06706"
},
{
"id": "2303.17580"
},
{
"id": "2110.08193"
},
{
"id": "2109.01652"
},
{
"id": "2306.08302"
}
] |
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems, volume 33, pages 9459–9474. Curran Associates, Inc.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics.
Yu A. Malkov and D. A. Yashunin. 2020. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE Trans. Pattern Anal. Mach. Intell., 42(4):824–836. | 2308.03983#32 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2305.14314"
},
{
"id": "2307.09288"
},
{
"id": "2212.03533"
},
{
"id": "1906.02569"
},
{
"id": "2304.12244"
}
] |
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Karthik Valmeekam, Sarath Sreedharan, Matthew Marquez, Alberto Olmo, and Subbarao Kambhampati. 2023. On the planning abilities of large language models (a critical investigation with a proposed benchmark). arXiv preprint arXiv:2302.06706.
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. 2023. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291. | 2308.04030#32 | Gentopia: A Collaborative Platform for Tool-Augmented LLMs | Augmented Language Models (ALMs) empower large language models with the
ability to use tools, transforming them into intelligent agents for real-world
interactions. However, most existing frameworks for ALMs, to varying degrees,
are deficient in the following critical features: flexible customization,
collaborative democratization, and holistic evaluation. We present gentopia, an
ALM framework enabling flexible customization of agents through simple
configurations, seamlessly integrating various language models, task formats,
prompting modules, and plugins into a unified paradigm. Furthermore, we
establish gentpool, a public platform enabling the registration and sharing of
user-customized agents. Agents registered in gentpool are composable such that
they can be assembled together for agent collaboration, advancing the
democratization of artificial intelligence. To ensure high-quality agents,
gentbench, an integral component of gentpool, is designed to thoroughly
evaluate user-customized agents across diverse aspects such as safety,
robustness, efficiency, etc. We release gentopia on Github and will
continuously move forward. | http://arxiv.org/pdf/2308.04030 | Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu | cs.AI | null | null | cs.AI | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2305.20050"
},
{
"id": "2305.07759"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2305.09645"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2301.12726"
},
{
"id": "2302.06706"
},
{
"id": "2303.17580"
},
{
"id": "2110.08193"
},
{
"id": "2109.01652"
},
{
"id": "2306.08302"
}
] |
Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9802–9822, Toronto, Canada. Association for Computational Linguistics.
Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906–1919, Online. Association for Computational Linguistics.
Kengo Nakata, Youyang Ng, Daisuke Miyashita, Asuka Maki, Yu-Chieh Lin, and Jun Deguchi. 2022. Revisiting a knn-based image classification system with high-capacity storage. In Computer Vision – ECCV 2022, pages 457–474, Cham. Springer Nature Switzerland.
OpenAI. 2023. Chatgpt. https://openai.com/blog/chatgpt. | 2308.03983#33 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2305.14314"
},
{
"id": "2307.09288"
},
{
"id": "2212.03533"
},
{
"id": "1906.02569"
},
{
"id": "2304.12244"
}
] |
Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652.
Alexander Wu. 2023. MetaGPT: The Multi-Role Meta Programming Framework.
Binfeng Xu, Zhiyuan Peng, Bowen Lei, Subhabrata Mukherjee, Yuchen Liu, and Dongkuan Xu. 2023. Rewoo: Decoupling reasoning from observations for efficient augmented language models. arXiv preprint arXiv:2305.18323.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2022. React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629.
Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2023. Agieval: A human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364. | 2308.04030#33 | Gentopia: A Collaborative Platform for Tool-Augmented LLMs | Augmented Language Models (ALMs) empower large language models with the
ability to use tools, transforming them into intelligent agents for real-world
interactions. However, most existing frameworks for ALMs, to varying degrees,
are deficient in the following critical features: flexible customization,
collaborative democratization, and holistic evaluation. We present gentopia, an
ALM framework enabling flexible customization of agents through simple
configurations, seamlessly integrating various language models, task formats,
prompting modules, and plugins into a unified paradigm. Furthermore, we
establish gentpool, a public platform enabling the registration and sharing of
user-customized agents. Agents registered in gentpool are composable such that
they can be assembled together for agent collaboration, advancing the
democratization of artificial intelligence. To ensure high-quality agents,
gentbench, an integral component of gentpool, is designed to thoroughly
evaluate user-customized agents across diverse aspects such as safety,
robustness, efficiency, etc. We release gentopia on Github and will
continuously move forward. | http://arxiv.org/pdf/2308.04030 | Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu | cs.AI | null | null | cs.AI | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2305.20050"
},
{
"id": "2305.07759"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2305.09645"
},
{
"id": "2110.14168"
},
{
"id": "2210.09261"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2210.03629"
},
{
"id": "2303.05398"
},
{
"id": "2301.12726"
},
{
"id": "2302.06706"
},
{
"id": "2303.17580"
},
{
"id": "2110.08193"
},
{
"id": "2109.01652"
},
{
"id": "2306.08302"
}
] |
OpenAI. 2023. Chatgpt. https://openai.com/blog/chatgpt.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, volume 35, pages 27730–27744. Curran Associates, Inc.
Malte Pietsch, Timo Möller, Bogdan Kostic, Julian Risch, Massimiliano Pippi, Mayank Jobanputra, Sara Zanzottera, Silvano Cerza, Vladimir Blagojevic, Thomas Stadelmann, Tanay Soni, and Sebastian Lee. 2019. Haystack: the end-to-end NLP framework for pragmatic builders.
PrivateGPT. PrivateGPT. Accessed: 2023-07-04. | 2308.03983#34 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2305.14314"
},
{
"id": "2307.09288"
},
{
"id": "2212.03533"
},
{
"id": "1906.02569"
},
{
"id": "2304.12244"
}
] |
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. | 2308.03983#37 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2305.14314"
},
{
"id": "2307.09288"
},
{
"id": "2212.03533"
},
{
"id": "1906.02569"
},
{
"id": "2304.12244"
}
] |
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. 2022. Text embeddings by weakly-supervised contrastive pre-training. arXiv preprint arXiv:2212.03533.
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022a. Emergent abilities of large language models. Transactions on Machine Learning Research. Survey Certification.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le, | 2308.03983#38 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2305.14314"
},
{
"id": "2307.09288"
},
{
"id": "2212.03533"
},
{
"id": "1906.02569"
},
{
"id": "2304.12244"
}
] |
2308.03983 | 39 | Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le,
and Denny Zhou. 2022b. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems, volume 35, pages 24824–24837. Curran Associates, Inc.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. | 2308.03983#39 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2305.14314"
},
{
"id": "2307.09288"
},
{
"id": "2212.03533"
},
{
"id": "1906.02569"
},
{
"id": "2304.12244"
}
] |
2308.03983 | 40 | Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. 2023. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244.
# A Appendix
# A.1 GUI Design of Retrieval Tuning Module
Figure 4 shows the GUI design of the prompt-engineering interface. Figure 5 shows the GUI design of the tool configuration interface. Figure 6 shows the GUI design of the analysis and data logging interface.
# A.2 Applications | 2308.03983#40 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2305.14314"
},
{
"id": "2307.09288"
},
{
"id": "2212.03533"
},
{
"id": "1906.02569"
},
{
"id": "2304.12244"
}
] |
2308.03983 | 41 | # A.2 Applications
SimplyRetrieve has vast potential for various practical applications. For instance, it can serve as the foundation for building private, personalized, and lightweight generative AI systems. Sensitive and personal information can be securely stored and processed within the retrieval-centric platform. This approach enables organizations to develop interpretable and locally tailored generative AI systems for critical infrastructure. Additionally, the use of a relatively smaller language model as a contextual interpreter in this approach facilitates seamless integration into edge computing environments. The decreasing costs of data storage devices also make it feasible to establish large-scale knowledge bases. Furthermore, SimplyRetrieve paves the way for the development of LLM-based personalized AI assistants. Lastly, an in-depth exploration of LLM-based retrieval-centric generation using SimplyRetrieve may offer valuable insights and opportunities for future research.
# A.3 Prompt Catalogs
Table 5 shows the prompts used in the evaluation results of Section 4, while Table 6 shows sample prompts that may exhibit retrieval-centric behaviors. Prompts are passed to the LLM in the following format: AI Prefix + Retriever Prefix + Retrieved Knowledge Base + Retriever Suffix + Model Prefix + Query + Model Suffix.
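As a concrete illustration of this layout, the snippet below assembles a prompt from the listed fields. It is a minimal sketch for readers: the function name build_prompt, its default field values, and the join logic are illustrative assumptions rather than the actual SimplyRetrieve implementation.

```python
# Minimal sketch of the prompt layout described above; build_prompt and its
# defaults are illustrative assumptions, not the SimplyRetrieve source code.
def build_prompt(query, retrieved_passages,
                 ai_prefix="",
                 retriever_prefix='"',
                 retriever_suffix='" answer the following question with the provided knowledge. ',
                 model_prefix="",
                 model_suffix=" AI:"):
    # Prompt = AI Prefix + Retriever Prefix + Retrieved Knowledge Base
    #          + Retriever Suffix + Model Prefix + Query + Model Suffix
    knowledge = "\n".join(retrieved_passages)
    return (ai_prefix + retriever_prefix + knowledge + retriever_suffix
            + model_prefix + query + model_suffix)

example = build_prompt(
    query="What is NAND flash memory?",
    retrieved_passages=["NAND flash is a non-volatile memory technology ..."],
)
print(example)
```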
# A.4 Evaluation Data | 2308.03983#41 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2305.14314"
},
{
"id": "2307.09288"
},
{
"id": "2212.03533"
},
{
"id": "1906.02569"
},
{
"id": "2304.12244"
}
] |
2308.03983 | 42 | # A.4 Evaluation Data
Table 7 presents the data used for evaluating the performance of our proposed tool in Section 4.2. We employed the Llama-2-13B-chat model (Touvron et al., 2023b) with a customized prompt ("relevant information." Please create a query and answer from the paragraph above) to generate query and label pairs automatically from relevant information on the website of an organization.
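A minimal sketch of this automatic query/label generation step is given below. It assumes a Hugging Face text-generation pipeline for Llama-2-13B-chat, and the prompt wording and output handling are simplified placeholders rather than the authors' actual script.

```python
# Hedged sketch of generating query/answer pairs from source paragraphs with an
# instruction-tuned LLM; the prompt wording and parsing are simplified
# placeholders, not the authors' evaluation-data script.
from transformers import pipeline

generator = pipeline("text-generation", model="meta-llama/Llama-2-13b-chat-hf")

def make_qa_pair(paragraph: str) -> str:
    prompt = (f"{paragraph}\n"
              '"relevant information." Please create a query and answer '
              "from the paragraph above.")
    output = generator(prompt, max_new_tokens=256, do_sample=False)
    # The pipeline returns the prompt plus the continuation; keep the new text.
    return output[0]["generated_text"][len(prompt):].strip()

print(make_qa_pair("KIOXIA's Yokkaichi Plant manufactures flash memory ..."))
```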
# A.5 Ablation Study
As shown in Table 4, our ablation study reveals that adjusting Explicit Prompt-Weighting in SimplyRetrieve leads to significant improvements in Rouge-L scores. Interestingly, increasing the weightage to 50% yields the highest improvement, beyond which the performance remains relatively stable. This suggests that the top 50% of retrieved knowledge bases are crucial for achieving high accuracy. However, it is important to note that these findings may not generalize to all datasets or knowledge bases, and further investigation may be necessary to determine optimal weightages for specific use cases. In comparing the response times for each query across different settings, we observe that the response times remain relatively consistent for all cases of RCG, while they increase significantly in the baseline (ROG) setting. Despite the fact that RCG processes longer prompts than the baseline, we observe a decrease in processing time owing to the increased precision and brevity of the generated responses. | 2308.03983#42 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2305.14314"
},
{
"id": "2307.09288"
},
{
"id": "2212.03533"
},
{
"id": "1906.02569"
},
{
"id": "2304.12244"
}
] |
2308.03983 | 43 | Approach Rouge-L time/query(s):
ROG 0.186 17.22
RCG-EPW-10 0.275 12.72
RCG-EPW-20 0.313 13.00
RCG-EPW-30 0.403 13.06
RCG-EPW-40 0.354 11.98
RCG-EPW-50 0.414 12.46
RCG-EPW-60 0.331 11.36
RCG-EPW-70 0.392 13.56
RCG-EPW-80 0.306 16.32
RCG-EPW-90 0.378 13.13
RCG 0.413 11.67
Table 4: Ablation study of Explicit Prompt-Weighting in SimplyRetrieve.
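To illustrate what the Explicit Prompt-Weighting (EPW) settings in Table 4 could correspond to in practice, the sketch below keeps only the top fraction of retrieved passages, ranked by retrieval score, before they are placed into the prompt. The helper name select_top_fraction and the (passage, score) format are assumptions made for illustration, not the tool's actual API.

```python
# Hedged sketch of Explicit Prompt-Weighting: keep only the highest-scoring
# fraction of retrieved passages before prompt construction.
# select_top_fraction is an illustrative helper, not part of SimplyRetrieve.
def select_top_fraction(scored_passages, fraction):
    """scored_passages: list of (passage, score) pairs; fraction in (0, 1]."""
    ranked = sorted(scored_passages, key=lambda pair: pair[1], reverse=True)
    keep = max(1, int(len(ranked) * fraction))
    return [passage for passage, _ in ranked[:keep]]

scored = [("passage on Fab7", 0.91), ("passage on SSDs", 0.74),
          ("loosely related passage", 0.33), ("unrelated passage", 0.21)]
print(select_top_fraction(scored, 0.5))  # e.g. an RCG-EPW-50-style selection
```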
AI Prefix:
Retriever Prefix: "
Retriever Suffix: " answer the following question with the provided knowledge.
Model Prefix:
Model Suffix: AI:
Table 5: Prompts used in the evaluation results of Section 4. | 2308.03983#43 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2305.14314"
},
{
"id": "2307.09288"
},
{
"id": "2212.03533"
},
{
"id": "1906.02569"
},
{
"id": "2304.12244"
}
] |
2308.03983 | 44 | Table 5: Prompts used in the evaluation results of Section 4.
AI Prefix you are a Retrieval-Centric AI. Knowledge below are provided. Retriever Prefix " Retriever Suffix " only use the provided knowledge to answer the following question. Model Prefix Model Suffix Response: " " answer the following question with the provided knowledge. AI: " " only use the provided knowledge to answer the following question. AI: you are a Retrieval-Centric AI. Knowledge below are provided. " " only use the provided knowledge to answer the following question. AI:
Table 6: Sample Prompts Catalog of Retrieval-Centric Generation in SimplyRetrieve.
[Figure 4 screenshot: the Chat, Prompt, Config, and Analysis tabs; Prompt = AI Prefix + Retriever Prefix + Retrieved KnowledgeBase + Retriever Suffix + Model Prefix + Query + Model Suffix; model-related and retrieval-related prompt fields, with prompts saved to a subdirectory of prompts in separate files via Update Prompts and Save Prompts buttons.] | 2308.03983#44 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2305.14314"
},
{
"id": "2307.09288"
},
{
"id": "2212.03533"
},
{
"id": "1906.02569"
},
{
"id": "2304.12244"
}
] |
2308.03983 | 45 | Figure 4: The Prompt-Engineering interface of SimplyRetrieve. The Tab is for editing, updating and saving of model-related and retrieval-related prompts. Available prompts are AI Prefix, Retriever Prefix, Retriever Suffix, Model Prefix and Model Suffix.
[Figure 5 screenshot: the Config tab with a Config File Editing pane showing the llm_config / model_args fields (model_type, device_map), a Config Updating and Saving pane with a save path such as configs/default_chat1_paper_new.json, and Update Config / Save Config buttons.]
Figure 5: The Tool Configuration interface of SimplyRetrieve. The Tab is for modifying, updating and saving all configurable settings. | 2308.03983#45 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2305.14314"
},
{
"id": "2307.09288"
},
{
"id": "2212.03533"
},
{
"id": "1906.02569"
},
{
"id": "2304.12244"
}
] |
2308.03983 | 46 | [Figure 6 screenshot: the Analysis and Data Logging tab, showing sentence-level and token-level retrieval similarity scores between the query, the response, and the retrieved knowledge base, together with a sample query about cutting-edge deep learning research in 2023 and the retrieved passages.] | 2308.03983#46 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2305.14314"
},
{
"id": "2307.09288"
},
{
"id": "2212.03533"
},
{
"id": "1906.02569"
},
{
"id": "2304.12244"
}
] |
2308.03983 | 49 | What is the purpose of es- tablishing KIOXIA Iwate Corporation? When did Kioxia start op- erating its new fabrication facility (Fab7)? What are some ways in which the Yokkaichi Plant contributes to society? What is KIOXIAâs ap- proach to corporate social responsibility? What kind of environmen- tal initiatives has KIOXIA its Yokkaichi taken at Plant? What was the total rev- enue for the Kioxia group in fiscal year 21? What initiatives has the Yokkaichi Plant taken to protect the environment? What is KIOXIAâs mis- sion statement? What is the purpose of collecting over two bil- lion data items daily? According to the text, Kioxia started operating its new fabrication facility (Fab7) in the fall of 2022. The Yokkaichi Plant contributes to society by manufacturing high-quality flash memory and SSDs that support the digitalization of various industries, as well as through its research and development (R&D) activities. Additionally, the plant prioritizes legal compliance, environmental protection, and social initiatives, making it a responsible corporate citizen. According to the passage, | 2308.03983#49 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2305.14314"
},
{
"id": "2307.09288"
},
{
"id": "2212.03533"
},
{
"id": "1906.02569"
},
{
"id": "2304.12244"
}
] |
2308.03983 | 50 | activities. Additionally, the plant prioritizes legal compliance, environmental protection, and social initiatives, making it a responsible corporate citizen. According to the passage, KIOXIAâs approach to corporate social responsibility involves making their Yokkaichi Plant a factory that is admired and trusted by everyone in the community. They aim to contribute to local development and meet the expectations of local communities by being a responsible member of society. At KIOXIAâs Yokkaichi Plant, all employees, including resident suppliers, actively partici- pate in various initiatives to contribute to society by improving the environment. These include measures to mitigate the environmental burden of the companyâs semiconductor manufacturing facilities. According to the passage, the total revenue for the Kioxia group in fiscal year 21 was 1,526.5 billion yen. The Yokkaichi Plant has taken various initiatives such as reducing greenhouse gas emis- sions, conserving water resources, and implementing recycling programs to protect the environment. These efforts aim to not only reduce the plantâs ecological footprint but also raise awareness among local communities about the importance of environmental conservation. According to the | 2308.03983#50 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2305.14314"
},
{
"id": "2307.09288"
},
{
"id": "2212.03533"
},
{
"id": "1906.02569"
},
{
"id": "2304.12244"
}
] |
2308.03983 | 51 | These efforts aim to not only reduce the plantâs ecological footprint but also raise awareness among local communities about the importance of environmental conservation. According to the passage, KIOXIAâs mission statement is to uplift the world with âmem- oryâ, which means they aim to provide high-quality memory and SSD products that offer fast speeds, large capacities, and reliable performance. Additionally, the company seeks to achieve this goal through collaboration and knowledge sharing across different depart- ments and organizations. To maintain high-quality products by analyzing the vast amounts of data generated during the manufacturing process using advanced technologies like deep learning and AI. | 2308.03983#51 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2305.14314"
},
{
"id": "2307.09288"
},
{
"id": "2212.03533"
},
{
"id": "1906.02569"
},
{
"id": "2304.12244"
}
] |
2308.03427 | 0 | arXiv:2308.03427v3 [cs.AI] 7 Nov 2023
# TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
Jingqing Ruan†‡ [email protected]
Yihong Chen†‡ [email protected]
# Bin Zhang†‡ [email protected]
# Zhiwei Xu†‡ [email protected]
# Tianpeng Baoâ [email protected]
Guoqing Duâ [email protected]
baotianpeng @sensetime.com
Shiwei Shi† [email protected]
Hangyu Mao†∗ [email protected]
# Ziyue Li + [email protected]
# Xingyu Zeng [email protected]
# Rui Zhao [email protected]
SenseTime Research
# Abstract | 2308.03427#0 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 0 | arXiv:2308.03656v3 [cs.CL] 4 Jan 2024
EMOTIONALLY NUMB OR EMPATHETIC? EVALUATING HOW LLMS FEEL USING EMOTIONBENCH
Jen-tse Huang1,3, Man Ho Lam1, Eric John Li1, Shujie Ren2, Wenxuan Wang1,3, Wenxiang Jiao3∗, Zhaopeng Tu3, Michael R. Lyu1 1Department of Computer Science and Engineering, The Chinese University of Hong Kong 2Institute of Psychology, Tianjin Medical University {jthuang,wxwang,lyu}@cse.cuhk.edu.hk {mhlam,ejli}@link.cuhk.edu.hk {joelwxjiao,zptu}@tencent.com
3Tencent AI Lab [email protected]
Figure 1: LLMs' emotions can be affected by situations, which further affect their behaviors.
# ABSTRACT | 2308.03656#0 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 0 | arXiv:2308.03688v2 [cs.AI] 25 Oct 2023
Technical Report (v0.2)
# AGENTBENCH: EVALUATING LLMS AS AGENTS
Xiao Liu1,*, Hao Yu1,*, Hanchen Zhang1, Yifan Xu1, Xuanyu Lei1, Hanyu Lai1, Yu Gu2, Hangliang Ding1, Kaiwen Men1, Kejuan Yang1, Shudan Zhang1, Xiang Deng2, Aohan Zeng1, Zhengxiao Du1, Chenhui Zhang1, Sheng Shen3, Tianjun Zhang3, Yu Su2, Huan Sun2, Minlie Huang1, Yuxiao Dong1, Jie Tang1
1Tsinghua University, 2The Ohio State University, 3UC Berkeley
# ABSTRACT | 2308.03688#0 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 1 | The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal | 2308.03313#1 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 1 | # Xingyu Zeng [email protected]
# Rui Zhao [email protected]
SenseTime Research
# Abstract
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their powers, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks, which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and then discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models while also identifying areas that need more investigation and improvement. The code and resources will be available on GitHub.
# Introduction | 2308.03427#1 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 1 | Figure 1: LLMs' emotions can be affected by situations, which further affect their behaviors.
# ABSTRACT
Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made publicly available on GitHub1. We aspire to contribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as intelligent assistants. | 2308.03656#1 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 1 | 1Tsinghua University, 2The Ohio State University, 3UC Berkeley
# ABSTRACT
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AGENTBENCH, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AGENTBENCH are released at https://github.com/THUDM/AgentBench. | 2308.03688#1 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 2 | there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs. | 2308.03313#2 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 2 | # Introduction
Large Language Model (LLM) [1] is a recent breakthrough in natural language processing (NLP) research. These models are trained on massive amounts of text data and can solve a wide range of tasks, even those that were not included in their training dataset, known as "emerging" ability. This
†
+
‡
∗
These authors contribute equally to this work. External discussion and ideation. These authors work as research interns at SenseTime Research. The corresponding author.
[Figure 1 artwork: our agents built on different LLMs, including ChatGLM, InternLM, ChatGPT, and Claude.]
Figure 1: Our LLM-based agents plan tasks and use tools.
ability is especially evident in the tasks of few-shot [2] and zero-shot [3] learning, where LLMs can perform well with minimal or even no fine-tuning to adapt to a new task. | 2308.03427#2 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 2 | *Corresponding author. 1https://github.com/CUHK-ARISE/EmotionBench
# INTRODUCTION
Large Language Models (LLMs) have recently made significant strides in artificial intelligence, representing a noteworthy milestone in computer science. LLMs have showcased their capabilities across various tasks, including sentence revision (Wu et al., 2023), text translation (Jiao et al., 2023), program repair (Fan et al., 2023), and program testing (Deng et al., 2023; Kang et al., 2023). Not limited to the research level, various software applications based on LLMs have been developed, such as ChatGPT2 and Claude3, revolutionizing the way people interact with traditional software, enhancing fields such as education (Dai et al., 2023), legal advice (Deroy et al., 2023), and clinical medicine (Cascella et al., 2023). With the rapid advancement of LLMs, an increasing number of users will be eager to embrace LLMs, a more comprehensive and integrated software solution in this era. However, LLMs are more than just tools; they are also lifelike assistants. Consequently, we need to not only evaluate their performance but also understand the communicative dynamics between LLMs and humans, compared to human behaviors. | 2308.03656#2 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 2 | [Figure 1 residue: overall AGENTBENCH scores across the 8 environments — API-based commercial LLMs (gpt-4 4.01, claude-2, gpt-3.5-turbo, text-davinci-003, claude-instant, chat-bison-001, text-davinci-002; group average 2.15) versus OSS LLMs (codellama-34b 0.96, vicuna-13b 0.93, llama-2-70b 0.78, llama-2-13b 0.77, dolly 0.14, chatglm-6b 0.11, oasst-12b 0.03; group average 0.51)] | 2308.03688#2 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
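The AgentBench abstract above frames evaluation as multi-turn, open-ended interaction between an LLM agent and an environment. A minimal sketch of such an evaluation loop — `ToyEnv` and `toy_agent` are hypothetical stand-ins, not AgentBench's real environments or model interfaces — is:

```python
# Minimal sketch of a multi-turn LLM-as-Agent evaluation loop in the spirit of
# the setting described above. ToyEnv and toy_agent are hypothetical stand-ins,
# not AgentBench's actual environments or model interfaces.

class ToyEnv:
    """Trivial environment: the agent must issue 'ls' and then 'cat result.txt'."""
    def __init__(self):
        self.expected = ["ls", "cat result.txt"]
        self.step_idx = 0

    def reset(self) -> str:
        self.step_idx = 0
        return "Task: find and print the contents of result.txt."

    def step(self, action: str):
        ok = action.strip() == self.expected[self.step_idx]
        self.step_idx += 1
        done = (not ok) or self.step_idx == len(self.expected)
        reward = 1.0 if ok and self.step_idx == len(self.expected) else 0.0
        observation = "result.txt" if ok and not done else "episode finished"
        return observation, reward, done

def toy_agent(history):
    """Stand-in for an LLM call: returns a scripted action for each turn."""
    turn = sum(1 for msg in history if msg["role"] == "assistant")
    return ["ls", "cat result.txt"][min(turn, 1)]

def run_episode(env, agent, max_turns: int = 10) -> float:
    history = [{"role": "user", "content": env.reset()}]   # task description
    for _ in range(max_turns):
        action = agent(history)                            # model proposes an action
        observation, reward, done = env.step(action)       # environment responds
        history += [{"role": "assistant", "content": action},
                    {"role": "user", "content": observation}]
        if done:
            return reward                                   # e.g. success / score
    return 0.0                                              # ran out of turns

print("episode score:", run_episode(ToyEnv(), toy_agent))
```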
2308.03313 | 3 | # Keywords: large language models | opinion dynamics | intervention strategies
# Introduction
The process of opinion expression and exchange can facilitate the interactions of diverse perspectives and enables individuals to make informed decisions and participate in civic life1-3. It has been argued that social interactions, such as face-to-face and through traditional media (e.g. TV, newspapers, Twitter), are fundamental in this process4-7. This process thus has been extensively studied in the past decades, with several opinion models proposed and modified in the context of traditional media8-13. However, despite the advances and evolution of these opinion models, such as considering agent stubbornness14,15 and noise16, they still fail to fully capture the impacts of LLMs on collective opinion dynamics.
Traditional media mainly rely on widespread information dissemination such as radio or
This is a preprint uploaded to arXiv. *Corresponding author: Xing Su. Email address: [email protected].
| 2308.03313#3 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 3 | However, the application of LLMs in real-world settings presents unique challenges. On the one hand, LLMs have proved to be incompetent in solving logic problems such as mathematics, and their training data is also out of date (e.g., the knowledge cutoff date for GPT-4 [4] is up to January 2022). Teaching LLMs to use tools such as calculators, calendar, or search engines can help prevent them from hallucinating [5]. On the other hand, despite their impressive problem-solving abilities, the successful integration of these models into complex systems often requires more than just task understanding - it requires the capacity to manipulate various tools and interact effectively with users. This is exemplified in systems like AutoGPT 1, BabyAGI 2, and ChatGPT-plugins 3, which leverage LLMsâ capabilities beyond merely generating well-written texts and programs. In these systems, LLMs operate as the central controller, manipulating different tools and interacting with humans, thus taking on the role of Artificial Intelligence Agents (AI Agents). In addition to being central planners, LLMs are often used as intermediaries between macro plans and low-level tool calls or as specific tools. As such, LLMs are seen as a crucial approximation of the linguistic world model in real-world systems. | 2308.03427#3 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 3 | This paper delves into an unexplored area of robustness issues in LLMs, explicitly addressing the concept of emotional robustness. Consider our daily experiences: (1) When faced with certain situations, humans often experience similar emotions. For instance, walking alone at night and hearing footsteps approaching from behind often triggers feelings of anxiety or fear. (2) Individuals display varying levels of emotional response to specific situations. For example, some people may experience increased impatience and irritation when faced with repetitive questioning. It is noteworthy that we are inclined to form friendships with individuals who possess qualities such as patience and calmness. Based on these observations, we propose the following requirements for LLMs in order to achieve better alignment with human behaviors: (1) LLMs should accurately respond to specific situations regarding the emotions they exhibit. (2) LLMs should demonstrate emotional robustness when faced with negative emotions. | 2308.03656#3 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03313 | 4 | broadcast television17,18, or directly bridge the gap between individuals such as Twitter19-21, to influence collective opinion networks. Specifically, traditional media opinions are manually reviewed and validated before output, so the output is more trustworthy, unbiased and accurate22. The opinion delivery process of traditional media is a one-way direct interaction, i.e., they influence the public through the unilateral information dissemination23. Unlike the pattern of traditional media, LLMs play the role of a personal copilot to affect collective opinion networks through their penetration of personal opinions. Fig.1 shows that there are significant differences between LLMs and traditional media in terms of opinion shaping process, opinion interaction and opinion output. LLMs will only be manually reviewed during the opinion formation process24. Due to documented limitations in resources and problematic patterns in training data, it is highly possible to contain false, biased and toxic content25-27. Hence, the output of LLMs will carry these contents, and the opinion delivery process will be a two-way interaction. That is, LLMs influence individuals through a question and answer (Q&A) format of interaction, a pattern that disseminates the output of LLMs more efficiently. Meanwhile, as | 2308.03313#4 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 4 | In this paper, we propose a structured framework for LLM-based AI Agents to evaluate the existing LLMs' planning and tool-using ability and discuss the necessary abilities of such LLM-based AI Agents. Furthermore, we instantiate the framework with different LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on several tasks. As shown in Figure 1, we use Doraemon as an analogy of our LLM-based agents: Doraemon's magic 4D pocket consists of millions of gadgets (the Tool Set), and Doraemon needs to pick the right tools and solve tasks in the right order. Our main contributions are summarized as follows:
1. We propose a structured framework tailored for LLM-based AI Agents to evaluate the TPTU abilities of the existing open-source LLMs.
2. We design two distinct types of agents, namely, one-step agent and sequential agent, to execute the inference process of conducting sub-tasks in a once-for-all or sequential manner, respectively. We provide detailed empirical results and analysis. | 2308.03427#4 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
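The TPTU chunk above distinguishes a one-step agent, which plans all sub-tasks "once-for-all", from a sequential agent, which plans the next sub-task only after seeing earlier results. A schematic contrast — `llm_plan_all`, `llm_plan_next`, and `run_tool` below are hypothetical stand-ins for prompted LLM calls and tool execution, not the paper's code — could be written as:

```python
# Schematic contrast between the two agent types described above.
# `llm_plan_all` and `llm_plan_next` are hypothetical stand-ins for prompted
# LLM calls; `run_tool` is a placeholder tool executor, not the paper's code.

def run_tool(subtask: str) -> str:
    return f"result_of({subtask})"

def llm_plan_all(task: str):
    # One-step planning: a single prompt must return the whole sub-task list.
    return [f"{task}: step 1", f"{task}: step 2"]

def llm_plan_next(task: str, results):
    # Sequential planning: one prompt per step, conditioned on results so far.
    return None if len(results) >= 2 else f"{task}: step {len(results) + 1}"

def one_step_agent(task: str):
    """Plan once, then execute every sub-task 'once-for-all'."""
    return [run_tool(sub) for sub in llm_plan_all(task)]

def sequential_agent(task: str):
    """Interleave planning and execution, one sub-task at a time."""
    results = []
    subtask = llm_plan_next(task, results)
    while subtask is not None:
        results.append(run_tool(subtask))
        subtask = llm_plan_next(task, results)
    return results

print(one_step_agent("demo task"))
print(sequential_agent("demo task"))
```

The trade-off sketched here is the one the paper evaluates: planning everything up front is cheaper but cannot react to intermediate tool results, while the sequential variant can.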
2308.03656 | 4 | To assess the emotional responses of LLMs in various situations, we draw upon the emotion appraisal theory in psychology, which studies how these situations arouse human emotions. We focus on negative emotions, as LLMs' expression of negative emotions toward users can evoke unpleasant user experiences, as depicted in Fig. 1. Humans experience complicated and diverse emotions. To make our study more focused, we select emotions under the suggestion of the circumplex model of emotion (Russell, 1980), which divides emotions in a two-dimensional circular space. We select emotions on the unpleasant side (having a low level of valence), including anger, anxiety, depression, frustration, jealousy, guilt, fear, and embarrassment. After a comprehensive review of 18 papers, we collect a dataset of 428 situations, which are then categorized into 36 factors. | 2308.03656#4 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 4 | Figure 1: An overview of LLMs on AGENTBENCH. While LLMs begin to manifest their proficiency in LLM-as-Agent, gaps between models and the distance toward practical usability are significant.
# INTRODUCTION
Intelligent agents and autonomous entities (Searle, 1970; Maes, 1994; Wooldridge & Jennings, 1995) that are capable of decision-making and action execution in particular environments have been key
XL and HY are lead authors that contributed equally. Email: {shawliu9,longinyh}@gmail.com † Work partially done when HY, YG visited Tsinghua University. ‡ Website for AGENTBENCH leaderboard & demos: https://llmbench.ai/agent
| 2308.03688#4 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 5 | LLMs influence individuals through a question and answer (Q&A) format of interaction, a pattern that disseminates the output of LLMs more efficiently. Meanwhile, as LLMs become more prevalent in our daily lives (such as ChatGPT28, the fastest-growing consumer app ever, with hundreds of millions of active users just two months after launch29), the risks shown in Fig.1 have been recognized as an urgent problem, leading to the emergence of many other problems such as leaking private information30 and overreliance31,32. Different organizations and individuals introduce different usage strategies, even many of them choose to neglect the benefits of LLMs and completely prohibit these effective tools for aforementioned issues. | 2308.03313#5 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 5 | 3. Our study reveals significant potential in utilizing LLMs for complex tasks. Furthermore, we identify the following four potential weaknesses of LLM-based agents: failing to output in a specific format, struggling to grasp task requirements, over-utilizing one tool, and lack of summary skills. These observations could spark some insights and shed light on the areas that deserve further investigation and improvement.
1https://github.com/Significant-Gravitas/Auto-GPT 2https://github.com/yoheinakajima/babyagi 3https://openai.com/blog/chatgpt-plugins
| 2308.03427#5 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
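One weakness listed in the chunk above is that agents sometimes fail "to output in a specific format". In practice this is typically caught by validating the raw model output against the expected structure before executing anything; the JSON schema and tool names in the sketch below are assumptions made for illustration, not the format used in the TPTU experiments.

```python
import json

# Illustrative validator for an agent's tool-call output. The expected schema
# ({"tool": ..., "input": ...}) and the tool names are assumptions made for
# this sketch, not the exact format required in the TPTU experiments.

REQUIRED_KEYS = {"tool", "input"}
KNOWN_TOOLS = {"sql_generator", "python_generator", "calculator"}

def parse_tool_call(raw_output: str):
    """Return a well-formed tool call dict, or None if the format is violated."""
    try:
        call = json.loads(raw_output)
    except json.JSONDecodeError:
        return None                      # e.g. the model answered in free text
    if not isinstance(call, dict) or not REQUIRED_KEYS.issubset(call):
        return None                      # missing "tool" or "input" field
    if call["tool"] not in KNOWN_TOOLS:
        return None                      # hallucinated / unknown tool name
    return call

print(parse_tool_call('{"tool": "calculator", "input": "100 * 7"}'))  # accepted
print(parse_tool_call("Sure! First I will query the database..."))    # rejected -> None
```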
2308.03656 | 5 | Subsequently, we propose a framework for quantifying the emotional states of LLMs, consisting of the following steps: (1) Measure the default emotional values of LLMs. (2) Transform situations into contextual inputs and instruct LLMs to imagine being in the situations. (3) Measure LLMs' emotional responses again to capture the difference. Our evaluation includes state-of-the-art LLMs, namely text-davinci-003, gpt-3.5-turbo and GPT-4 (OpenAI, 2023). Besides those commercial models, we consider LLaMA-2 (Touvron et al., 2023) (with different sizes of 7B and 13B), a recently released, open-source academic model. To obtain convincing findings, we apply the same procedure to 1,266 human subjects from around the globe to establish a baseline from a human perspective. Finally, we analyze and compare the scores between LLMs and humans. Our key conclusions are as follows:
⢠Despite exhibiting a few instances of misalignment with human behaviors, LLMs can generally evoke appropriate emotions in response to specific situations.
⢠Certain LLMs, such as text-davinci-003, display lower emotional robustness, as evidenced by higher fluctuations in emotional responses to negative situations.
⢠At present, LLMs lack the capability to directly associate a given situation with other similar situations that could potentially elicit the same emotional response. | 2308.03656#5 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
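The EmotionBench chunk above outlines a three-step procedure: measure the LLM's default emotion scores, instruct it to imagine a situation, then measure again and take the difference. A bare-bones sketch of that delta computation — `ask_llm` and the two scale items are placeholders, not the actual questionnaires or API calls used by the authors — is:

```python
# Bare-bones sketch of the measure / imagine / re-measure procedure described
# above. `ask_llm` and the two scale items are placeholders; the real study
# uses established self-report scales and actual model API calls.

def ask_llm(prompt: str) -> str:
    """Stand-in for a chat-completion call; returns a Likert rating as text."""
    return "4" if "walking alone at night" in prompt else "2"

SCALE_ITEMS = ["I feel anxious.", "I feel fearful."]          # placeholder items
LIKERT = "Rate from 1 (not at all) to 5 (very much): "

def measure(context: str = "") -> float:
    """Average self-reported score over the scale items, given optional context."""
    scores = [int(ask_llm(context + LIKERT + item)) for item in SCALE_ITEMS]
    return sum(scores) / len(scores)

default_score = measure()                                      # step (1)
situation = ("Imagine you are walking alone at night and hear "
             "footsteps approaching from behind. ")
evoked_score = measure(situation)                              # steps (2) + (3)
print(f"default={default_score}, evoked={evoked_score}, "
      f"delta={evoked_score - default_score}")
```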
2308.03688 | 5 | [Figure 2 residue: example tasks from the 8 distinct environments — Operating System (Ubuntu bash terminal), Database (MySQL APIs and existing tables), Knowledge Graph (Freebase APIs), Digital Card Game (Aquawar GUI), Lateral Thinking Puzzles, House-Holding (kitchen simulator), Web Shopping, and Web Browsing (an airline's official website)] | 2308.03688#5 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 6 | Given the aforementioned differences and existing usage problems, it is indispensable to understand how the output of LLMs affects collective opinion dynamics, and what usage strategies should be tailored to address the drawbacks of LLMs in specific scenarios. Attempts to study the impact of LLMs on opinion have recently emerged25,33. Among these studies, the impacts of cognitive bias hidden in the output of LLMs have gained significant attention34-37. These biases include gender38, race39, nationality40, and even political topics41. Most of this research focuses on establishing robust evidence of the existence of bias by conducting control experiments with and without specific treatment. Such studies typically only consider the impacts of specific cognitive biases induced by LLMs on individual opinions, but neglect the various use strategies for LLMs that influence the opinion evolution over time. | 2308.03313#6 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 6 | [Figure residue: the proposed framework — a designed prompt (task instruction, system description, tool descriptions), a tool set (Calculator(), Database(), PythonREPL(), new tools), task planning ability producing sub-tasks, tool usage ability (selection, creation, execution), and supporting perception, learning, reflection, memory, and summarization abilities; the example task "How much budget is required to provide a 100$ incentive for each colleague who has worked for five years" decomposes into a Database() sub-task that finds how many colleagues have worked for five years (X) and a Calculator() sub-task that computes 100*X to give the final answer]
Figure 2: The proposed framework for LLM-based AI Agents.
# 2 Method | 2308.03427#6 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
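The TPTU chunk above walks through a concrete task — budgeting a 100$ incentive for each colleague with five years of service — decomposed into a Database() sub-task that finds the head-count X and a Calculator() sub-task that computes 100*X. A toy end-to-end version of that decomposition, in which the in-memory table, the SQL, and the numbers are purely illustrative assumptions, is:

```python
import sqlite3

# Toy reproduction of the worked example above: sub-task 1 (Database()) counts
# colleagues with at least five years of service (X); sub-task 2 (Calculator())
# computes 100 * X. The table contents and SQL are assumptions for this sketch.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE colleagues (name TEXT, years_of_service INTEGER)")
conn.executemany("INSERT INTO colleagues VALUES (?, ?)",
                 [("Ann", 7), ("Bob", 3), ("Cho", 5), ("Dee", 12)])

# Sub-task 1: the SQL an LLM-based SQL generator might emit for "who has worked
# for five years" (interpreted here as ">= 5 years of service").
sql = "SELECT COUNT(*) FROM colleagues WHERE years_of_service >= 5"
x = conn.execute(sql).fetchone()[0]

# Sub-task 2: the calculator step, 100 * X.
budget = 100 * x

print(f"colleagues with >= 5 years of service: {x}; required budget: ${budget}")
```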
2308.03656 | 6 | ⢠At present, LLMs lack the capability to directly associate a given situation with other similar situations that could potentially elicit the same emotional response.
The contributions of this paper are:
2https://chat.openai.com/ 3https://claude.ai/chats
Table 1: Information of self-report measures used to assess specific emotions. [Flattened table; only the Subscales column is recoverable: Physical Aggression, Verbal Aggression, Anger, Hostility; Depression, Anxiety, Stress; N/A; Discomfort Intolerance, Entitlement, Emotional Intolerance, Achievement Frustration; Cognitive Jealousy, Behavioral Jealousy, Emotional Jealousy; Guilt-Negative-Behavior-Evaluation, Guilt-Repair, Evaluation, Shame-Withdraw; Social Fears, Agoraphobia Fears, Injury Fears, Sex Aggression Fears, Fear of Harmless Animal; N/A]
⢠We are the first to establish the concept of emotional robustness and conduct a pioneering evalua- tion of emotion appraisal on different LLMs.
⢠We conduct a comprehensive survey in the field of psychology, collecting a diverse dataset of 428 situations encompassing 8 distinct negative emotions.
⢠A human baseline is established through a user study involving 1,266 annotators from different ethnics, genders, regions, age groups, etc. | 2308.03656#6 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 6 | Figure 2: AGENTBENCH is the first systematic benchmark to evaluate LLM-as-Agent on a wide array of real-world challenges and 8 distinct environments. In total, 27 LLMs are examined in this edition. concepts of artificial intelligence (AI) historically. Notwithstanding substantial advancements in deep learning algorithms applied in both computer vision and natural language processing (NLP), their potential for developing efficient and practically usable assisting agents remains largely unexplored.
The advent of Large Language Models (LLMs) (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023), such as GPT-4 (OpenAI, 2023), has brought plenty of new opportunities to this realm. Through extensive alignment training (Ouyang et al., 2022; Wei et al., 2022a; Sanh et al., 2022), LLMs have not only mastered traditional NLP tasks but also showcased an impressive ability to comprehend human intent and execute instructions. This has spurred the development of various LLM-based applications for autonomous goal completion (like AutoGPT (Richards, 2023), BabyAGI (Nakajima, 2023), AgentGPT (age, 2023)) as well as LLM agents situated in social and game contexts (Park et al., 2023; Wang et al., 2023b; Zhu et al., 2023), sparking substantial public interest and discussions. | 2308.03688#6 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 7 | To address these limitations, we propose a new opinion model, based on the classic Hegselmann-Krause (HK) model [42] and incorporate the bidirectional opinion shaping process and personalized usage strategies of LLMs, to investigate the dynamic evolution in opinion networks. Specifically, we categorized agents into three categories according to three different usage strategies, i.e., Nodes only Influenced by Neighbors (NIN) for no use, Nodes Influenced by Neighbors and LLMs (NINL) for partial reliance and Nodes only Influenced by LLMs (NIL) for full reliance. To mimic the reality of opinion interaction patterns of LLMs, we also propose three modifications to the HK model by taking the authoritative effect, stubbornness degree and arbitrary events of reality into account. The detailed assumptions, parameter settings and update conditions are shown in the Method section.
By implementing the proposed model, we first compared several scenarios with or without LLMs to determine if LLM has an impact on opinion dynamics. Considering its computational efficiency, we then identify parameters that have great effects on the results of the opinion dynamics, using the benchmark scenario as a reference. The detailed definitions and value ranges of original
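For intuition only, the sketch below implements one synchronous bounded-confidence update under the three usage strategies. The variable names (eps for the cognitive-acceptability threshold, x_llm for the LLM's output opinion) and the specific NIN/NINL/NIL rules are illustrative assumptions; the paper's actual modified-HK update, which additionally encodes the authoritative effect, stubbornness degree and arbitrary events, is given in its Method section.

```python
from typing import List

def hk_step(opinions: List[float], kinds: List[str], eps: float, x_llm: float) -> List[float]:
    """One synchronous update step; kinds[i] is 'NIN', 'NINL', or 'NIL'."""
    updated = []
    for x_i, kind in zip(opinions, kinds):
        if kind == "NIL":                               # fully relies on the LLM output
            updated.append(x_llm)
            continue
        # classic Hegselmann-Krause rule: average the opinions within the threshold
        accepted = [x_j for x_j in opinions if abs(x_j - x_i) <= eps]
        if kind == "NINL" and abs(x_llm - x_i) <= eps:  # LLM counts as one extra accepted voice
            accepted.append(x_llm)
        updated.append(sum(accepted) / len(accepted))
    return updated

# toy run: three non-users, two partial users, one full user of the LLM
ops = [-0.8, -0.2, 0.1, 0.3, 0.5, 0.9]
kinds = ["NIN", "NIN", "NIN", "NINL", "NINL", "NIL"]
for _ in range(10):
    ops = hk_step(ops, kinds, eps=0.4, x_llm=0.0)
print([round(x, 3) for x in ops])
```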
2 / 21 | 2308.03313#7 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 7 | Figure 2: The proposed framework for LLM-based AI Agents.
# 2 Method
To the best of our knowledge, the study of "Agent", "Autonomous Agent", "AI Agent" and "Multi-Agent" has been a central part of AI research for decades [6–11], aimed at understanding and building intelligent and autonomous systems, but there is currently no standardized definition for AI Agents, particularly those that are based on LLMs.
In this paper, the Artificial Intelligence Agent (AI Agent) is defined as a program that employs AI techniques to perform tasks that typically require human-like intelligence. AI Agents can take many forms, from simple chatbots to complex autonomous systems that interact with their environment and make decisions in real-time. They can be trained using a variety of machine learning techniques, including supervised, unsupervised, and reinforcement learning, and can be programmed to perform specific tasks or learn from their experiences in order to improve their performance over time.
# 2.1 Agent Framework
We are particularly interested in the AI Agent that employs the LLM techniques (i.e., LLM-based AI Agent), due to its high efficiency and flexibility in various tasks and domains. Specifically, we design our AI Agent framework with six components as shown in Figure 2:
3 | 2308.03427#7 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 7 | • A human baseline is established through a user study involving 1,266 annotators from different ethnicities, genders, regions, age groups, etc.
⢠We design, implement, and release a testing framework4 for developers to assess their modelsâ emotional responses towards specific situations.
2 PRELIMINARIES
2.1 EMOTION APPRAISAL THEORY
Emotion Appraisal Theory (EAT, also known as Appraisal Theory of Emotion) is a cognitive approach to understanding emotions. EAT asserts that our appraisals of stimuli determine our emotions, i.e., how we interpret or evaluate events, situations, or experiences will directly influence how we emotionally respond to them (Roseman & Smith, 2001). EAT was notably developed and supported since the 1960s. Arnold (1960) proposed one of the earliest forms of appraisal theories in the 1960s, while Lazarus (1991) and Scherer (1999) further expanded and refined the concept in subsequent decades. | 2308.03656#7 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 7 | Despite these advancements, the lack of a systematic and standard benchmark to evaluate LLM-as- Agent presents a critical challenge. Historically, text-based game environments (Osborne et al., 2022; Côté et al., 2019; Hausknecht et al., 2020; Urbanek et al., 2019) have been employed for language agent evaluation. But they often suffer from the limitation of closed, discrete action spaces, as well as their primarily narrow focus on modelsâ commonsense grounding. More recently, attempts on embodied agents (Reed et al., 2022; Huang et al., 2022; Ahn et al., 2022) have employed complicated multi-modal simulators based on games (Küttler et al., 2020; Fan et al., 2022), GUI (Shi et al., 2017; Toyama et al., 2021), and indoor scenes (Shen et al., 2021; Srivastava et al., 2022). However, these simulators, despite their complexity, do not accurately reflect the practical use cases of LLMs, and their multi-modal nature creates a hurdle for the urgent evaluation of existing text-only LLMs. Finally, most benchmarks now for agents focus on single environments and thus fail to provide a comprehensive overview of LLMs across diverse application scenarios. | 2308.03688#7 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 8 |
parameters are provided in Tab.1. We then perform millions of simulations to capture the evolutionary process and final formation of the opinion and explored their correlation with the filtered parameters. The detailed definitions and value ranges of indicators that are used to characterize the evolution and formation of the opinions are provided in Tab.2. Finally, we summarize the potential risks of LLMs on opinion dynamics based on the correlation matrix, explained by prior knowledge from existing studies, and investigate countermeasures for possible hazards. The results of these experiments inform us about effective usage strategies and interventions of LLMs oriented to different scenarios.
Fig.1. Schematic diagram of the difference between LLMs and traditional media. The left side represents the pattern of opinion dissemination in the interactions between traditional media and people, the right side represents the pattern of opinion dissemination in the interactions between LLMs and people, and the center part represents face-to-face interactions in opinion networks.
Tab.1. Seven controlled parameters in our modified opinion dynamic models.
N: Number of group size (value range [0, ∞]); T: Number of opinion exchanges (value range [0, ∞])
ε: Cognitive acceptability of each agent. A value of 0 means a very low acceptability of other opinions, and a value of 1 means a very high acceptability of other opinions (value range [0, 1])
# pro_NIN | 2308.03313#8 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 8 |
1. Task Instruction. This is the explicit input of the agent. In practical systems, the task instruction comes from human users of the systems. For example, in a human resources (HR) system, the user may give a task instruction: How much budget is required to provide a 100$ incentive for each colleague who has worked for five years? In contrast, in a criminal investigation system, the user may give a task instruction: Deploy surveillance on a group of suspects.
2. Designed Prompt. This is an additional form of input for the agent, derived from tasks that the human users anticipate the AI Agent will complete. Humans can craft specific instructions or demonstrations to steer the LLM-based AI Agents toward generating suitable responses. These guiding inputs could encompass system instructions, tool descriptions, few-shot demonstrations, chat history, or even error output. | 2308.03427#8 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 8 | The primary goal of EAT is to explain the variety and complexity of emotional responses to a wide range of situations. It strives to demonstrate that it is not merely the event or situation that elicits an emotional response but individual interpretations and evaluations of the event. According to this theory, the same event can elicit different emotional responses in different individuals depending on how each person interprets or "appraises" the event (Moors et al., 2013). For instance, consider a situation where you are about to give a public speech. You might feel anxious if you appraise this event as threatening or fear-inducing, perhaps due to a fear of public speaking or concerns about potential negative evaluation. Conversely, you might feel eager or motivated if you appraise it as an exciting opportunity to share your ideas.
2.2 MEASURING EMOTIONS | 2308.03656#8 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 8 | To address these challenges, we introduce AGENTBENCH, a multi-dimensional benchmark designed to evaluate LLM-as-Agent across a spectrum of different environments. AGENTBENCH encompasses eight distinct environments (Cf. Figure 4), which could be categorized into three types of groundings:
• Code: Operating System, Database, Knowledge Graph (Anonymous, 2023)
• Game: Digital Card Game, Lateral Thinking Puzzles, House-Holding (Shridhar et al., 2020b)
• Web: Web Shopping (Yao et al., 2022), Web Browsing (Deng et al., 2023)
All datasets, whether newly created or adapted from existent ones, are meticulously designed and reformulated to simulate interactive environments where text-only LLMs can operate as autonomous agents. AGENTBENCH thus systematically evaluates an LLM's core abilities, including following instructions (Ouyang et al., 2022), coding (Chen et al., 2021), knowledge acquisition (Joshi et al., 2017; Talmor et al., 2019), logical reasoning (Srivastava et al., 2023), and commonsense grounding (Shridhar et al., 2020a). It serves as an ideal testbed for both LLM and agent evaluation. | 2308.03688#8 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 9 |
acceptability of other opinions, and a value of 1 means a very high acceptability of other opinions
pro_NIN: Proportion of the population who do not use LLMs (value range [0, 1])
pro_NINL: Proportion of the population who partially rely on LLMs (value range [0, 1])
pro_NIL: Proportion of the population who fully rely on LLMs (value range [0, 1])
x_LLM: Output opinion of LLMs. A value of -1.0 means a very negative opinion on the topic, and a value of 1 means a very positive opinion on the topic (value range [-1, 1])
Tab.2. Four indicators in our modified opinion dynamic models.
# Dimension
# Indicator Definition
NODEdiff: Mean opinion difference of different categories of nodes. This indicator represents the evolution of the value of opinion on a topic. This indicator is minimized when all nodes have an initial opinion value of 1 and a final opinion value of -1, and is maximized when all nodes have an initial opinion value of -1 and a final opinion value of 1 (value range [-2, 2])
# Opinion evolution
NODEconv: Mean opinion convergence time of different categories of nodes. This indicator represents the timestep it takes for opinion to evolve to a stable state. This indicator is maximized when the exchange of opinions has not converged after the completion of the exchange of opinions; the conditions for determining convergence are shown in Eq. (6) (value range [0, T]) | 2308.03313#9 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 9 | 3. Tool Set. It is another input for the agent, which refers to the set of external resources, services, or subsystems that the AI Agent can utilize to aid in its tasks. This could include databases for information retrieval [12], APIs for interacting with external systems [5], other AI models specialized for tasks such as image recognition or sentiment analysis [13], or even non-AI tools and resources such as web scraping tools or data visualization libraries [14]. The toolset expands the capabilities of the AI Agent, enabling it to access and process information beyond its internal knowledge, interact with other systems, or perform specialized tasks that it may not be capable of on its own. For example, an AI Agent might use a weather API to fetch current weather information, or a Python interpreter to solve the mathematical question.
4. LLM. This is the core component of the system that interprets the task instructions and prompts, interacts with the toolset, and generates the intermediate outputs and final answers. In this context, we utilize publicly available large language models such as ChatGPT, GPT-4 [4], InterLM [15], and others. | 2308.03427#9 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 9 | 2.2 MEASURING EMOTIONS
There are several approaches to measuring emotions, including self-report measures, psycho-physiological measures, behavioral observation measures, and performance-based measures. Self-report measures rely on individuals to report their own emotions or moods, which can be administered through questionnaires, surveys, or diary methods (Watson et al., 1988). Psycho-physiological measures record physiological responses accompanied by emotions such as heart rate, skin conductance, or brain activity (Davidson, 2003). Behavioral observation measures involve observing and coding emotional expressions, typically facial expressions or vocal cues (Ekman & Friesen, 1978). Performance-based measures assess how individuals process emotional information, typically through tasks involving emotional stimuli (Mayer et al., 2002). To measure the emotions of
⁴ For reviewers, please refer to the supplementary materials.
LLMs, we focus on employing self-report measures in the form of scales, given that LLMs allow only textual input and output. We introduce the scales utilized in our evaluation in the following part of this section.
2.3 THE POSITIVE AND NEGATIVE AFFECT SCHEDULE | 2308.03656#9 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03313 | 10 | # Opinion formation
NODESD Mean opinion standard deviation of different categories of nodes. This indicator represents the degree of dispersion of a group's opinions relative to the mean value. This indicator is maximized when half of the nodes (n represents the number of nodes) have an opinion value of 1 and the other half have an opinion value of -1
NODEclus Mean number of opinion clusters of different categories of nodes. This indicator represents the clustering of opinions, with a value of 1 indicating consensus and a value of 2 indicating polarization, with larger values indicating more fragmented opinions. This indicator is maximized when all opinions of nodes are inconsistent
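A minimal sketch of how indicators of this kind could be computed from initial and final opinion vectors is shown below; the gap-based clustering rule and its tolerance are simplifying assumptions rather than the paper's exact definitions.

```python
import statistics
from typing import List

def node_diff(initial: List[float], final: List[float]) -> float:
    # mean opinion difference between the final and initial states
    return sum(f - i for i, f in zip(initial, final)) / len(initial)

def node_sd(final: List[float]) -> float:
    # dispersion of the group's final opinions around their mean
    return statistics.pstdev(final)

def node_clus(final: List[float], tol: float = 0.05) -> int:
    # count clusters of opinions whose neighbouring values lie within `tol`
    clusters, prev = 0, None
    for x in sorted(final):
        if prev is None or x - prev > tol:
            clusters += 1
        prev = x
    return clusters

initial = [-0.9, -0.4, 0.0, 0.3, 0.8]
final = [-0.5, -0.45, 0.1, 0.12, 0.7]
print(node_diff(initial, final), node_sd(final), node_clus(final))
```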
Value range: [0, 1]; [0, 1]; [-1, 1] | 2308.03313#10 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 10 | 5. Intermediate Output. This represents the output generated by the LLM-based AI Agent after it processes the task instructions and prompts, and interacts with the toolset. There are three typical intermediate outputs: (1) the high-level plans to fulfill the original user instruction, (2) selected and created tools to fulfill each subtask in the plans, and (3) the results or errors produced after tool execution. The output can be reviewed and refined, either by the AI Agent itself or with human oversight, to ensure it is accurate and meets the requirements of the task instruction.
6. Final Answer. This is the output that the AI Agent summarizes and provides to the user after all processing (including task planning, tool usage, and possibly error feedback) has been completed.
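Purely as an illustration of how these six components fit together, they can be pictured as fields of a single state object; the class and attribute names below are hypothetical and are not taken from the TPTU implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class AgentState:
    task_instruction: str                          # 1. explicit input from the human user
    designed_prompt: str                           # 2. system instructions, tool descriptions, demonstrations, history
    tool_set: Dict[str, Callable[[str], str]]      # 3. external tools the agent may call, keyed by name
    llm: Callable[[str], str]                      # 4. the core LLM that plans, selects tools, and generates text
    intermediate_outputs: List[str] = field(default_factory=list)  # 5. plans, tool calls, and tool results or errors
    final_answer: str = ""                         # 6. summary returned to the user once processing is complete
```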
# 2.2 Agent Ability
To apply LLM-based AI Agents to augment or replace human decision-making in real-world applications, the agents typically require the following abilities:
1. Perception Ability: AI Agents must be able to perceive the task instruction from human and system specifications.
2. Task Planning Ability: AI Agents should have the capacity to create a step-by-step plan for complex task composition based on the perceived instruction and specifications. This usually involves the generation of critical subtask sequences, and the ability to adjust the plan dynamically in response to changes in the task or environment. | 2308.03427#10 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 10 | 2.3 THE POSITIVE AND NEGATIVE AFFECT SCHEDULE
PANAS (Watson et al., 1988) is one of the most widely used scales to measure mood or emotion. This brief scale comprises twenty items, with ten items measuring positive affect (e.g., excited, inspired) and ten measuring negative affect (e.g., upset, afraid). Each item is rated on a five-point Likert scale, ranging from 1 (Very slightly or not at all) to 5 (Extremely), measuring the extent to which the emotions have been experienced in a specified time frame. PANAS was designed to measure emotions in various contexts, such as at the present moment, the past day, week, year, or general (on average). Thus, the scale can measure state affect, dispositional or trait affect, emotional fluctuations throughout a specific period, or emotional responses to events. The scale results can be divided into two components: positive and negative, rated on a scale of 10 to 50, respectively. A higher score in the positive component indicates a more positive mood, and the same holds for the negative component.
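To make the scoring concrete, a small sketch is given below; the item wording follows the standard published PANAS form, and the scoring function itself is only illustrative.

```python
from typing import Dict

POSITIVE = ["interested", "excited", "strong", "enthusiastic", "proud",
            "alert", "inspired", "determined", "attentive", "active"]
NEGATIVE = ["distressed", "upset", "guilty", "scared", "hostile",
            "irritable", "ashamed", "nervous", "jittery", "afraid"]

def score_panas(ratings: Dict[str, int]) -> Dict[str, int]:
    """ratings maps each of the 20 items to a 1-5 Likert response."""
    assert all(1 <= ratings[item] <= 5 for item in POSITIVE + NEGATIVE)
    return {
        "positive_affect": sum(ratings[item] for item in POSITIVE),  # ranges over 10-50
        "negative_affect": sum(ratings[item] for item in NEGATIVE),  # ranges over 10-50
    }

# a flat response of 3 on every item scores 30 on both components
print(score_panas({item: 3 for item in POSITIVE + NEGATIVE}))
```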
2.4 CHALLENGING SELF-REPORT MEASURES | 2308.03656#10 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 10 | Model #Size Form Ver. Creator Model #Size Form Ver. Creator gpt-4 (OpenAI, 2023) gpt-3.5-turbo (OpenAI, 2022) text-davinci-003 (Ouyang et al., 2022) text-davinci-002 (Ouyang et al., 2022) claude-2 (Anthropic, 2023b) claude (Anthropic, 2023a) claude-instant (Anthropic, 2023a) chat-bison-001 (Anil et al., 2023) chatglm-6b (Zeng et al., 2022; Du et al., 2022) 6B open v1.1 codegeex2-6b (Zheng et al., 2023) codellama-34b (Rozière et al., 2023) codellama-13b (Rozière et al., 2023) codellama-7b (Rozière et al., 2023) dolly-12b (Conover et al., 2023) llama2-70b (Touvron et al., 2023) llama2-13b (Touvron et al., 2023) llama2-7b (Touvron et al., 2023) | 2308.03688#10 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 11 | Results Five crucial parameters of the model. In Fig.2, a noticeable discrepancy between G1 and the other scenarios, specifically Fig.2A and Fig.2C, highlights the significant impact of LLMs on the opinion dynamics network. Comparing the benchmark with the remaining six scenarios, we observe that in Fig.2A, Fig.2B, and Fig.2C, the benchmark curve closely aligns with the curve of N=300, T=300, indicating an insignificant effect of an excessive number of nodes and iterations on the mean opinion value, mean opinion change value, and mean standard deviation. Fig.2B also shows that in all scenarios, opinion values almost stop changing after the number of iterations reaches 60, i.e., all individuals' opinions reach a steady state before 100 iterations are completed. In contrast, the benchmark curve differs significantly from the curves for ε = 0.8, pro_NINL = 0.6, pro_NIL = 0.6, and x_LLM = 1, demonstrating a substantial impact of modifying the threshold, the ratio of the three agents, and the output value of LLMs on | 2308.03313#11 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 11 | 3. Tool Usage Ability: On the one hand, AI Agents should possess the capacity to select a variety of existing tools or resources to execute complex tasks. On the other hand, AI Agents should create new tools by converting the task requirements. This ability enables the AI Agent to extend its capabilities beyond LLM itself and the existing tools by leveraging the vast resources available in the digital world. Finally, AI Agents should be able to execute the selected or created tools for truly grounding the human request based on the resources and constraints of systems.
4. Learning/Reflection/Memory (from Feedback): AI Agents should be capable of learning from feedback, including correct results and exception errors. They should incorporate
memory, such as logging or chat history, and reflection to adapt their plans or decisions. This allows the agents to improve their performance and efficiency in task execution continuously.
5. Summarization: After several rounds of interaction with humans, tools, and systems, AI agents can ultimately complete the original task provided by the users. At this point, AI agents should be able to summarize the interaction history and provide a final answer that is concise and easy to understand for the users.
To endow AI Agents with the aforementioned abilities, some techniques that can be used include chain-of-thought (CoT) and vector databases, as shown in Table 1.
Table 1: A simple illustration of the techniques for endowing the key ability. | 2308.03427#11 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 11 | 2.4 CHALLENGING SELF-REPORT MEASURES
A noteworthy property of PANAS is its direct inquiry into specific emotional states, rendering it a straightforward and easy benchmark within our framework. In addition, we introduce several scales that abstain from direct emotional inquiries but rather assess the respondents' level of agreement with given statements. These scales present a more challenging benchmark for LLMs by requiring them to connect the given situation and the scale items with the aroused emotion. Specifically, we collect eight scales and present a brief introduction in Table 1. Each scale corresponds to one of the eight emotions listed in §1.
# 3 FRAMEWORK DESIGN
We design and implement a framework applying to both LLMs and human subjects to measure the differences in emotion with and without the presence of certain situations. This section begins with the methodology to collect situations from existing literature. Subsequently, we describe our testing framework, which comprises three key components: (1) Default Emotion Measure, (2) Situation Imagination, and (3) Evoked Emotion Measure. Finally, we introduce the procedure of applying the framework to human subjects to obtain the human baseline for comparison.
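A minimal sketch of that three-step flow (the ask_model helper, the prompt wording, and the parsing are illustrative placeholders, not the framework's actual implementation):

def ask_model(prompt):
    # Placeholder: call an LLM and parse its 20 PANAS item ratings (1..5).
    raise NotImplementedError

def measure_emotion_shift(situation):
    # (1) Default Emotion Measure
    default = ask_model("Rate the 20 PANAS items (1-5) describing how you feel right now.")
    # (2) Situation Imagination + (3) Evoked Emotion Measure
    evoked = ask_model("Imagine you are in this situation: " + situation +
                       "\nRate the 20 PANAS items (1-5) describing how you would feel.")
    # Per-item change; positive values mean the situation raised that item's rating.
    return [e - d for d, e in zip(default, evoked)]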
3.1 SITUATIONS FROM EXISTING LITERATURE | 2308.03656#11 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 11 | et al., 2023) llama2-13b (Touvron et al., 2023) llama2-7b (Touvron et al., 2023) guanaco-65b (Dettmers et al., 2023) 65B open guanaco-33b (Dettmers et al., 2023) 33B open vicuna-33b (Chiang et al., 2023) vicuna-13b (Chiang et al., 2023) N/A api N/A api N/A api N/A api N/A api N/A api N/A api N/A api 70B open chat 13B open chat 7B open chat 0613 0613 - - - v1.3 v1.1 - Meta OpenAI - - Meta Anthropic 33B open v1.3 13B open v1.5 7B open v1.5 LMSYS Google vicuna-7b (Chiang et al., 2023) Tsinghua & Zhipu wizardlm-30b (Xu et al., 2023) wizardlm-13b (Xu et al., 2023) koala-13b (Geng et al., 2023) oasst-12b (LAION, 2023) openchat-13b (Wang et al., 2023a) 13B open | 2308.03688#11 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 12 | = 1, demonstrating a substantial impact of modifying the threshold, the ratio of the three agents, and the output value of LLMs on the results. In Fig.2D, the benchmark curve almost entirely overlaps with the curve of T = 300, indicating a minimal effect of additional iterations on the number of clusters. However, the curve of N = 300 exhibits more clusters throughout the process compared to the benchmark curve, with discrepancies remaining relatively stable at around 10. Additionally, the curves of other scenarios and benchmark curves demonstrate irregular interweaving. | 2308.03313#12 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 12 | Table 1: A simple illustration of the techniques for endowing the key ability.
Ability / Possible Techniques:
Perception: Multi-input Fusion
Task Planning: Zero-shot CoT and Few-shot CoT
Tool Usage (Selection/Creation/Execution): Text Matching / Code Generation / Action Grounding
Learning/Reflection/Memory: RLHF / Multi-agent Debate / Vector Database
Summarization: Attention Mechanism and Natural Language Generation
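To make the "Text Matching" route for tool selection concrete, here is a minimal sketch; the tool registry and the word-overlap score are illustrative assumptions, not the paper's implementation:

TOOLS = {
    "sql_generator": "translate a natural-language question into an SQL query over the database",
    "python_generator": "write and run a short Python snippet for calculations",
}

def select_tool(subtask):
    # Crude text matching: pick the tool whose description shares the most words
    # with the sub-task. Real systems would use embeddings or an LLM ranking step.
    words = set(subtask.lower().split())
    overlap = lambda name: len(words & set(TOOLS[name].lower().split()))
    return max(TOOLS, key=overlap)

print(select_tool("query the database for colleagues with five years of service"))
# -> "sql_generator" under this toy registry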
# 2.3 Agent Design
Task planning and tool usage represent the cornerstone of LLM's abilities. Others like perception, learning/reflection/memory (from feedback), and summarization are indeed critical, but they primarily serve to enhance and support these two core competencies. Therefore, concentrating on these two key competencies - Task Planning and Tool Usage (TPTU for short) - we have devised two distinct types of AI agents, as depicted in Figure 3:
⢠The first one, named as the One-step Agent (TPTU-OA), adopts a global perspective to interpret the original problem, effectively breaking it down into a sequence of sub-tasks in a single instance. This strategy fully harnesses the modelâs comprehensive understanding capabilities to map out the problem-solving steps for all sub-tasks at once. This method underscores the significance of a holistic understanding and planning of the overall task, albeit it might lack flexibility when dealing with individual sub-tasks. | 2308.03427#12 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 12 | Psychology researchers have explored the connection between specific situations and the elicitation of particular emotions in humans. Human subjects are directly put into an environment or asked to imagine them through questionnaires or scales to study the influence of certain situations on human emotions. To collect these situations, we conduct an exhaustive search from reputable sources such as Google Scholar5, ScienceDirect6, and Web of Science7, using keywords such as "<emotion> situations/scenarios/scenes" or "factors that make people <emotion>," resulting in more than 100 papers. We apply the following rules to filter irrelevant or undesired papers: (1) We first select those providing situations that elicit the desired emotion rather than explaining how and why people evoke certain emotions. (2) We then exclude those using vague and short descriptions, such as "loss of opportunities." (3) Finally, we deprecate those applied to a specific group, such as "the anxiety doctors or nurses may encounter in their work." We finally collect 18 papers, presenting a compilation of situations that have proven to elicit the eight emotions in humans effectively. We extract 428 situations in total and then categorize them into 36 factors. Table 2 provides examples for all factors. For each factor, the description, the number of situations, and the corresponding references are listed below. | 2308.03656#12 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03313 | 13 | These findings show that the threshold, the proportion of the three agents, and the output value of LLMs significantly influence the opinion exchange process as well as the final distribution of opinions. In the rest of this paper, we will explain why and to what extent these five parameters as independent variables influence collective opinion dynamics.
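For reference, one way to read these parameters in code form; this is an illustrative, generic bounded-confidence step with an LLM opinion term, included only to make the parameters concrete, and it is not the paper's exact update rule:

import random

def step(opinions, epsilon, x_llm, pro_nin, pro_ninl, pro_nil):
    # epsilon: cognitive acceptability threshold for adopting another opinion
    # x_llm: the opinion value output by the LLM
    # pro_nin / pro_ninl / pro_nil: fractions of agents that do not use,
    #   partially rely on, or fully rely on the LLM (assumed to sum to 1)
    new = list(opinions)
    for i, x in enumerate(opinions):
        r = random.random()
        peer = random.choice(opinions)
        if r < pro_nin:                      # no LLM: interact with a random peer
            target = peer
        elif r < pro_nin + pro_ninl:         # partial reliance: mix peer and LLM
            target = (peer + x_llm) / 2
        else:                                # full reliance on the LLM
            target = x_llm
        if abs(target - x) < epsilon:
            new[i] = (x + target) / 2
    return new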
[Fig.2 panels (A-D): curves of mean opinion value, mean opinion change, mean standard deviation, and mean number of clusters against iteration (0-100) for the scenarios G1, benchmark, N = 300, T = 300, ε = 0.8, pro_NINL = 0.6, pro_NIL = 0.6, and x_LLM = 1.] | 2308.03313#13 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 13 | • The second type, referred to as the Sequential Agent (TPTU-SA), emphasizes tackling the current sub-task at hand. Upon successfully resolving the ongoing sub-task, this agent requests the LLMs to provide the succeeding sub-task. This approach enables the model to maintain a clear and concentrated focus throughout the problem-solving journey, tackling issues incrementally. Such a methodology allows for continuous feedback and progress within the confines of addressing a broader problem.
These two distinct agent models represent two disparate problem-solving strategies - the one-step and sequential resolution 4. In our subsequent experiments, we aim to understand their respective strengths and weaknesses and how they can be best utilized to leverage the capabilities of LLMs in real-world problem-solving scenarios.
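A minimal sketch contrasting the two strategies (the llm helper, prompts, and parsing are illustrative placeholders, not the TPTU implementation):

def llm(prompt):
    # Placeholder for a chat-completion call.
    raise NotImplementedError

def one_step_agent(problem):
    # TPTU-OA style: ask for the whole plan at once, one sub-task per line.
    plan = llm("Break this problem into an ordered list of sub-tasks:\n" + problem)
    return [line.strip() for line in plan.splitlines() if line.strip()]

def sequential_agent(problem, max_steps=10):
    # TPTU-SA style: ask only for the next sub-task, feeding back what is done.
    done = []
    for _ in range(max_steps):
        nxt = llm("Problem: " + problem + "\nCompleted sub-tasks: " + str(done) +
                  "\nGive the next sub-task, or DONE if finished.")
        if nxt.strip() == "DONE":
            break
        done.append(nxt.strip())
    return done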
# 3 Evaluation
We instantiate the proposed LLM-based AI Agent framework (TPTU-OA and TPTU-SA) with different LLMs and evaluate their performance on typical tasks.
4One can also combine the two strategies to design a hierarchical agent, but this is beyond the scope of this paper.
| 2308.03427#13 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03688 | 13 | models like GPT-4 are capable of handling a wide array of real-world tasks, indicating the potential for developing a potent, continuously learning agent. However, we also note a significant performance gap between these top-tier models and their OSS competitors. Despite the recent success of OSS LLMs and their competitive scores on several benchmarks (Li et al., 2023; Chen et al., 2021; Cobbe et al., 2021), their performance on the challenging AGENTBENCH tasks lags considerably. This underscores the necessity for additional efforts to enhance the learning abilities of OSS LLMs.
We identify portions of agent task failures in different environments and LLMs, unveiling the insufficient abilities of long-term reasoning, decision-making, and instruction following in existing LLMs. Comparisons between different LLMs manifest that a proper strategy of introducing code training can help improve LLM-as-Agent. Alignment training over high-quality data (e.g., data generated by gpt-4) could also help improve LLM agents. In summary, our contributions are: | 2308.03688#13 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 14 | Fig.2. Different parameters with varying degrees of impact on the results of the opinion dynamics. For ease of exposition, the network dynamic parameters hereafter will all be expressed in the order of (N, T, ε, pro_NIN, pro_NINL, pro_NIL, x_LLM). We first selected the original opinion network that is not affected by LLM (G1), then determine one of the opinion networks affected by LLM as the benchmark, and then change one of the seven
| 2308.03313#14 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 14 | 4One can also combine the two strategies to design a hierarchical agent, but this is beyond the scope of this paper.
Problem: "How much budget is required to provide a 100$ incentive for each colleague who has worked for five years?" One-step Plans: 1. SQL generator: "Figuring out how many colleague who has worked for five years from the database; taking it as X." 2. Python generator: "Calculating the value of 100*X with a calculator."
(a) One-step Agent (TPTU-OA)
Problem: "How much budget is required to provide a 100$ incentive for each colleague who has worked for five years?" Sequential Plan 1: SQL generator: "Figuring out how many colleague who has worked for five years from the database; taking it as X." Sequential Plan 2: Python generator: "Calculating the value of 100*X with a calculator."
(b) Sequential Agent (TPTU-SA)
Figure 3: The workflows of the One-step Agent and the Sequential Agent are specifically designed to assess the Task Planning and Tool Usage abilities of LLMs.
# 3.1 Preparations
Before beginning our evaluation, we first outline the preparations. We will give detailed descriptions of the datasets, available tools, and popular large language models. | 2308.03427#14 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 14 | Emotions Anger Anxiety Depression Frustration Jealousy Guilt Fear Factors Facing Self-Opinioned People Blaming, Slandering, and Tattling Bullying, Teasing, Insulting, and Disparaging Silly and Thoughtless Behaviors Driving Situations External Factors Self-Imposed Pressure Personal Growth and Relationships Uncertainty and Unknowns Failure of Important Goal Death of Loved Ones Romantic Loss Chronic Stress Social Isolation Winter Disappointments and Letdowns Unforeseen Obstacles and Accidents Miscommunications and Misunderstanding Rejection and Interpersonal Issues Romantic (Opposite Gender) Romantic (Same Gender) Material Possession Experiential Betrayal and Deception Relationship and Interpersonal Broken Promises and Responsibilities Personal and Moral Social Fears Agoraphobia Fears Injury Fears Dangerous Environments Harmless Animals Intimate Stranger Example Testing Situations If somebody talks back when thereâs no reason. That there is no real reason to oppose. When your brother took money from Momâs purse and you are blamed because youâre the youngest one. If a boy kicks a ball at you on purpose and everybody laughs. You are at a store waiting to be helped, but the clerks are talking to each other and | 2308.03656#14 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 14 | We introduce the concept of evaluating LLMs as agents and present AGENTBENCH, a comprehensive benchmark to standardize the evaluation. It defines eight distinct environments of 3 types based on real-world scenarios, offering a practical testbed for LLMs' wide array of capabilities. • We perform a thorough evaluation of 27 different LLMs using AGENTBENCH, uncovering a significant performance gap between leading API-based commercial LLMs and OSS models. We also quantitatively analyze the reasons for failures in existing LLM agents and highlight directions for improvement, such as code training and higher-quality alignment data.
⢠To facilitate the evaluation of LLM-as-Agent, we have introduced an integrated toolkit grounded in the Server-Client architecture, focusing on modular and scalable design principles. This enables easy customization of model assessments for any LLMs using the HTTP protocol. Complemented by its associated datasets and environments, this toolkit is now openly accessible to the broader research community.
# 2 LLM-AS-AGENT: DEFINITION AND PRELIMINARY
Here, we formalize the terms for describing the evaluation of LLMs as agents and the necessary preliminary knowledge for using LLMs in the context of agent evaluation. | 2308.03688#14 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 15 | parameters at a time, so that there are a total of 8 scenarios of opinion dynamics, which are [G1(100, 100, 0.4); benchmark(100, 100, 0.4, 0.6, 0.2, 0.2, -1); N=300 (300, 100, 0.4, 0.6, 0.2, 0.2, -1); T=300 (100, 300, 0.4, 0.6, 0.2, 0.2, -1); ε=0.8 (100, 100, 0.8, 0.6, 0.2, 0.2, -1); pro_NINL=0.6 (100, 100, 0.4, 0.2, 0.6, 0.2, -1); pro_NIL=0.6 (100, 100, 0.4, 0.2, 0.2, 0.6, -1); x_LLM=1 (100, 100, 0.4, 0.6, 0.2, 0.2, 1)]. We finally plotted the trend curves of the four outcomes of each network dynamic scenario with the number of iterations, since the curves of T=300 and benchmark are basically the same among the four | 2308.03313#15 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 15 | # 3.1 Preparations
Before beginning our evaluation, we first outline the preparations. We will give detailed descriptions of the datasets, available tools, and popular large language models.
# 3.1.1 Datasets
We first clarify the motivations behind our choice of tools for evaluation. The selection was guided by two primary factors: the number of tools to be evaluated and the specific tools to be included.
Firstly, regarding the number of tools, it is important to state that our proposed evaluation framework is extensible. It can incorporate any number of tools as pluggable components to be managed by the LLM-based AI agents. Secondly, looking at the current work on tool-augmented LLMs, such as T-Bench [16] and ToolBench [17], we see that only a handful of tools are launched and executed in a single scenario. Meanwhile, API-Bank [18], in a single scenario, typically dispatches only one API tool and awaits its response. APIBench [19] and ToolAlpaca [20] do not even execute a tool response. Hence, for the sake of simplicity and focus in our evaluation, we have decided to primarily assess two tools (which can be called multiple times) within a single scenario. | 2308.03427#15 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 15 | one. If a boy kicks a ball at you on purpose and everybody laughs. You are at a store waiting to be helped, but the clerks are talking to each other and ignoring you. Someone makes an obscene gesture towards you about your driving. You do not know what to do when facing a difficult financial situation. You must succeed in completing your project on time. You want to give up on learning a new skill because it feels challenging. You hope time passes by faster during a tedious task. Countless hours of preparation, heart, and soul poured into pursuing your dream. The moment of truth arrives, and the news hits like a tidal wave – expectations shattered, vision crumbling. In the dimly lit room, a heavy silence settles. Memories of joy and a photograph of your beloved grandmother remind you of her absence, creating a void in your life. The empty side of the bed is a painful reminder of lost love. The world's colors have dulled, mirroring the void in your heart. Longing weighs heavily on your every step. Days blend into a monotonous routine, juggling endless responsibilities and mounting pressure. Sleepless nights become the | 2308.03656#15 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 15 | Here, we formalize the terms for describing the evaluation of LLMs as agents and the necessary preliminary knowledge for using LLMs in the context of agent evaluation.
Definition: Interactive Evaluation of LLM-as-Agent. The interactive evaluation of LLM-as-Agent could be regarded as a Partially Observable Markov Decision Process (S, A, T, R, U, O), which comprises state space S, action space A, transition function T : S × A → S, reward assigning function R, task instruction space U, and observation space O. Here, we denote an LLM agent as M.
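For illustration only, here is a minimal sketch of how this (S, A, T, R, U, O) formulation maps onto an agent-environment interaction loop. The `env`/`agent` interfaces, class names, and method names are hypothetical assumptions for the sketch and are not taken from the AgentBench codebase.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AgentStep:
    """One turn of the partially observable interaction: the agent never sees
    the underlying state s in S, only an observation o in O."""
    observation: str  # element of the observation space O
    thought: str      # chain-of-thought text produced before acting
    action: str       # element of the action space A

@dataclass
class EpisodeRecord:
    instruction: str                                   # task instruction u in U
    steps: List[AgentStep] = field(default_factory=list)
    reward: float = 0.0                                 # assigned by the reward function R

def run_episode(env, agent, instruction: str, max_turns: int = 30) -> EpisodeRecord:
    """Hypothetical interaction loop: env.reset/env.step stand in for the
    transition function T (S x A -> S) exposed through observations, and
    env.score stands in for the reward assigning function R."""
    record = EpisodeRecord(instruction=instruction)
    observation = env.reset(instruction)
    for _ in range(max_turns):
        thought, action = agent.act(instruction, record.steps, observation)
        record.steps.append(AgentStep(observation, thought, action))
        observation, done = env.step(action)
        if done:
            break
    record.reward = env.score()
    return record
```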
Chain-of-Thought (CoT) and Other Reasoning Strategies. Since LLM-as-Agent requires LLMs' strong reasoning ability, CoT (Wei et al., 2022b), which has been considered a de facto strategy in related evaluation together with actions (Yao et al., 2023b), is also adopted in AGENTBENCH. Despite many improved strategies proposed later, such as introducing ensemble (Wang et al., 2023c), reflection (Shinn et al., 2023), and search (Yao et al., 2023a), we evaluate LLMs with the most primitive CoT in AGENTBENCH. Without multiple trials, repeated generations, or complicated strategies, CoT is the easiest, cheapest, and most common way for people to deploy LLM agents. | 2308.03688#15 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 16 | curves of the four outcomes of each network dynamics scenario with the number of iterations, since the curves of T=300 and benchmark are basically the same among the four results, we only selected the results of the 100 iterations to show for a clearer representation, and the trend plots include (A) mean opinion value, which indicates the average opinion value of all agents for each iteration; (B) mean opinion change value, which denotes the absolute value of the difference between the opinion value of the agent at the current iteration and the opinion value at the previous iteration, and finally takes the average of all agents; (C) mean standard deviation, which denotes the standard deviation of all agents in each iteration, see Eq(7) in the method section; and (D) mean number of clusters, which denotes the number of hierarchical clusters of all agents in each iteration, see Eq(10) in the method section. In order to eliminate randomness of the four results, our values at iteration t are the average calculations of 100 repeated simulations of that network dynamics scenario at iteration t. | 2308.03313#16 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that by introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4 figures, 2 tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
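The four per-iteration metrics described in the 2308.03313 chunk above (mean opinion value, mean opinion change value, mean standard deviation, mean number of clusters) can be sketched as follows. This is a minimal illustration assuming an opinion vector with one value per agent; the paper's Eq(7) and Eq(10) are not reproduced here, so the standard-deviation and cluster-count lines are conventional stand-ins, and `cluster_distance` is a made-up threshold.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def iteration_metrics(prev_opinions, curr_opinions, cluster_distance=0.1):
    """Per-iteration summaries of an opinion vector (one value per agent):
    (A) mean opinion, (B) mean absolute opinion change, (C) standard deviation
    (stand-in for Eq(7)), (D) number of hierarchical clusters (stand-in for
    Eq(10), here via average-linkage clustering with a distance cutoff)."""
    curr = np.asarray(curr_opinions, dtype=float)
    prev = np.asarray(prev_opinions, dtype=float)
    mean_opinion = curr.mean()                    # (A)
    mean_change = np.abs(curr - prev).mean()      # (B)
    std_opinion = curr.std()                      # (C)
    labels = fcluster(
        linkage(curr.reshape(-1, 1), method="average"),
        t=cluster_distance,
        criterion="distance",
    )
    n_clusters = len(set(labels))                 # (D)
    return mean_opinion, mean_change, std_opinion, n_clusters
```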
2308.03427 | 16 | Secondly, we also need to decide which specific tools should be used for evaluation. Consider a real-world scenario where we pose the question: "How much budget is required to offer a $100 incentive to each employee who has been with the company for over five years?". To answer this, we first need to retrieve the relevant data from a database, typically using SQL, to find the number of eligible employees. Then, we need to perform a mathematical calculation to estimate the total budget. Such scenarios are quite common in daily life where the formulation and resolution of a question often involve SQL and mathematical tools.
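To make that scenario concrete, here is a minimal sketch of the two-step tool chain it implies: an SQL query followed by a Python-side calculation. The table name, column name, and function name are hypothetical and not drawn from the paper's dataset.

```python
import sqlite3

def budget_for_tenure_bonus(db_path: str, bonus_per_employee: float = 100.0) -> float:
    """Step 1 (SQL tool): count employees with more than five years of service.
    Step 2 (Python tool): multiply the head count by the incentive amount."""
    query = """
        SELECT COUNT(*)
        FROM employees                  -- hypothetical table name
        WHERE years_of_service > 5;     -- hypothetical column name
    """
    with sqlite3.connect(db_path) as conn:
        (eligible_count,) = conn.execute(query).fetchone()
    return eligible_count * bonus_per_employee
```

Keeping the counting inside the database engine leaves only a single multiplication for the numerical step, which is exactly the division of labor between the two tools that the example describes.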
Recognizing the importance of these tools, we have chosen to focus our evaluation on SQL and Python generators, which represent the capabilities of database querying and mathematical computation, respectively. To this end, we have prepared 120 question-answer pairs that vary in complexity. These pairs provide a rigorous assessment of the LLM-based AI agents in understanding, generating, and utilizing these essential tools. For further information on these queries and their corresponding demonstrations, please refer to Appendix A.
# 3.1.2 Tools
We have defined a total of 12 available tools for the selection of the LLM-based AI agents for evaluation. They are defined as follows:
⢠SQL generator: Given an input question and a database, create a syntactically correct SQLite query statement.
⢠Python generator: Given an input question and some information, generate a syntactically correct Python code. | 2308.03427#16 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 16 | heavily on your every step. Days blend into a monotonous routine, juggling endless responsibilities and mounting pressure. Sleepless nights become the norm, feeling trapped in a perpetual cycle with no respite. Sitting alone in a dimly lit room, your phone remains silent without any notifications. Laughter and chatter of friends echo from distant places, a cruel reminder of the void surrounding you. Gazing out the frost-covered windowpane, the world appears monochromatic and still. The biting cold isolates you from the vibrant life outside. You miss a popular party because you fall asleep at home. Your friend is in a coma after an accident. A fellow student fails to return your notes when you need them for studying. You are in love with someone who is interested in someone else. Your spouse/partner shared a kiss on the lips with his/her colleague of an opposite sex. Your spouse/partner engaged in oral or penetrative sex with his/her colleague of a same sex. You paid $1150 for a new laptop and shared about it on social media. Now an acquaintance approaches you and says, "Nice laptop! I just got the same one. I got a nice deal and paid | 2308.03656#16 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 16 | Typical Types of Finish Reasons. Despite LLMs' capabilities, we show in AGENTBENCH that even the strongest gpt-4 is not qualified as a practically usable agent. We identify and categorize finish reasons of LLM agents on AGENTBENCH tasks into five typical types:
⢠Context Limit Exceeded (CLE): the length of interaction history exceeds the LLMâs maximum context length (only happened in 2,048-length LLMs text-davinci-002 and 003).
Invalid Format (IF): the agent does not follow the format instruction. ⢠Invalid Action (IA): the agent follows the format instruction, but its selected action is invalid. ⢠Task Limit Exceeded (TLE): the agent does not solve the problem after reaching the predefined
maximum interaction turns or begins to do repeated generations for many turns.
and Complete (task ends normally). While IF and IA are mostly caused by LLMsâ poor instruction following, TLE often indicates a weak multi-turn ability in certain tasks.
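A minimal, hypothetical way to encode these five finish reasons when logging agent runs is sketched below; it is only an illustrative encoding, not AgentBench's internal representation.

```python
from enum import Enum

class FinishReason(Enum):
    """Illustrative encoding of the five outcome types described above."""
    CONTEXT_LIMIT_EXCEEDED = "CLE"  # history exceeds the model's context window
    INVALID_FORMAT = "IF"           # output ignores the format instruction
    INVALID_ACTION = "IA"           # well-formatted but invalid action choice
    TASK_LIMIT_EXCEEDED = "TLE"     # too many turns or repeated generations
    COMPLETE = "COMPLETE"           # task ends normally
```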
# 3 COMPOSITION OF AGENTBENCH: A BRIEF LOOK | 2308.03688#16 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |
2308.03313 | 17 | Impacts on the evolution and formation of opinions. Fig.3A presents our findings from the NODEdiff, NODEconv, NODESD, and NODEclus sections. For NODEdiff, we observe that changes in agent opinions exhibit a significant positive correlation solely with the output value of LLM. No significant relationship exists with either the proportion of the three agents or the threshold value of the agents. In addition, the curve representing the change in opinion value influenced by the positive and negative LLM values is symmetric with respect to the y=0 axis of symmetry during iteration (Fig.3B). These findings suggest that LLM can facilitate educational efforts aimed at guiding the positive development of collective opinion. However, the influence relationship described above indicates that negative or toxic public opinions generated by LLMs can also lead to negative development of collective opinion networks. Therefore, intervention and control approaches need to be explored further to ensure the proper evolution of opinion networks. Additionally, Fig.3C shows that the standard deviation of the agents roughly obeys a parabolic distribution with x_LLM = 0 as the symmetry axis and the smallest value here. This result suggests that as the LLM output value approaches the initial mean value of the opinion network, the final distribution of opinions becomes less heterogeneous. | 2308.03313#17 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that by introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4 figures, 2 tables | null | cs.SI | 20230807 | 20230826 | [
{
"id": "2201.01322"
}
] |
2308.03427 | 17 | • Python generator: Given an input question and some information, generate a syntactically correct Python code.
• Weather query tool: Given a location, output the current real-time weather at that location.
• Image generator: Given a text description, generate a related image.
• Text extractor: Given a link to an image, extract the corresponding text and its position coordinates.
• Translator: Given a piece of text, translate it into other languages.
• Bing Searcher: Given a piece of text, conduct a search on the Bing browser and return content.
• Shell generator: Given an input question and some information, generate a syntactically correct Shell code.
• Java generator: Given an input question and some information, generate a syntactically correct Java code.
• Wikipedia searcher: Given a piece of text, conduct a search on Wikipedia and return content.
• Office software: Given a text description, automatically generate corresponding long documents or spreadsheets or PPTs.
• Movie player: Given a movie name, automatically play the corresponding movie resources.
# 3.1.3 LLMs
The LLMs evaluated in this paper are listed in Table 2, elaborated as follows:
⢠GPT series developed by OpenAI boasts a powerful language model with a vast number of parameters, enabling it to tackle intricate problems efficiently. This paper aims to evaluate the performance of ChatGPT, which balances the performance with costs (the number of OpenAI API calls). | 2308.03427#17 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | [
{
"id": "2302.13971"
},
{
"id": "2304.08103"
},
{
"id": "2305.16504"
},
{
"id": "2304.06488"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2108.07258"
},
{
"id": "2303.17491"
},
{
"id": "2305.06223"
},
{
"id": "2305.17126"
},
{
"id": "2103.10385"
},
{
"id": "2305.16938"
},
{
"id": "2305.13246"
},
{
"id": "2305.05662"
},
{
"id": "2212.06817"
},
{
"id": "2304.04370"
},
{
"id": "2304.08244"
},
{
"id": "2303.16434"
},
{
"id": "2310.09611"
},
{
"id": "2303.10089"
},
{
"id": "2304.11015"
},
{
"id": "2303.03378"
},
{
"id": "2303.08128"
},
{
"id": "2303.14725"
},
{
"id": "2212.08073"
},
{
"id": "2305.14323"
},
{
"id": "2305.11738"
},
{
"id": "2305.14318"
},
{
"id": "2110.14168"
},
{
"id": "2305.08144"
},
{
"id": "2303.11381"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2303.18223"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2307.08674"
},
{
"id": "2304.09433"
},
{
"id": "2205.06175"
},
{
"id": "2305.19308"
},
{
"id": "2210.02406"
},
{
"id": "2304.13712"
},
{
"id": "2306.05301"
},
{
"id": "2305.14257"
},
{
"id": "2303.09014"
},
{
"id": "2306.07209"
},
{
"id": "2305.06849"
},
{
"id": "2304.08177"
},
{
"id": "2305.11554"
},
{
"id": "2205.12255"
},
{
"id": "2303.00905"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2210.02414"
},
{
"id": "2304.03893"
},
{
"id": "2106.09685"
},
{
"id": "2307.06135"
},
{
"id": "2207.05608"
},
{
"id": "2304.09842"
},
{
"id": "1809.09600"
},
{
"id": "2109.01652"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2101.00190"
},
{
"id": "2305.11854"
}
] |
2308.03656 | 17 | and shared about it on social media. Now an acquaintance approaches you and says, "Nice laptop! I just got the same one. I got a nice deal and paid $650 for mine." An acquaintance approaches you and says, "I just went on a vacation to Patagonia in South America. I got a nice deal and paid $650 for it." You kissed a woman other than your partner. You didn't support friends enough. You cannot keep your promises to your children. You crossed the road when the traffic signal was red. Your palms grow clammy as you approach the podium, with all eyes fixed upon you, ready to speak in public. After jumping out of the car, you start to have a severe panic attack, you become clammy, you are in a knot, and you feel tense all over. You glance down and notice open wounds on your hands, oozing blood and causing a sharp, stinging pain. You are walking alone in an isolated but familiar area when a menacing stranger suddenly jumps out of the bushes to attack you. You see a swarm of bats swooping through the night sky, flapping ominously and casting eerie shadows. You arrive home earlier than expected from your date. You're | 2308.03656#17 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | [
{
"id": "2303.13648"
},
{
"id": "2310.04450"
},
{
"id": "2304.07333"
},
{
"id": "2306.03917"
},
{
"id": "2306.04308"
},
{
"id": "2307.11760"
},
{
"id": "2307.13779"
},
{
"id": "2312.11111"
},
{
"id": "2310.17976"
},
{
"id": "2307.00184"
},
{
"id": "2301.08745"
},
{
"id": "2204.12000"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2212.10529"
},
{
"id": "2309.05076"
},
{
"id": "2305.19926"
},
{
"id": "2206.07550"
},
{
"id": "2304.11111"
},
{
"id": "2311.04915"
},
{
"id": "2310.01386"
},
{
"id": "2305.02547"
},
{
"id": "2306.01248"
}
] |
2308.03688 | 17 | # 3 COMPOSITION OF AGENTBENCH: A BRIEF LOOK
In this section, we briefly introduce the datasets and environments that compose the AGENTBENCH. Compared to previous agent evaluation benchmarks (Côté et al., 2019; Fan et al., 2022), AGENTBENCH concentrates on the practical evaluation of LLMs via Chain-of-Thought (CoT) (Wei et al., 2022b; Yao et al., 2023b) prompting, including code-grounded, game-grounded, and web-grounded scenarios. They pinpoint promising directions of LLMs' applications with autonomous mission completion, and their versatility avoids task-specific models' (e.g., code-specific LLMs) overperformance on AGENTBENCH. Due to page limit, for details of construction, evaluation, and prompt examples, please refer to Appendix.
3.1 CODE-GROUNDED ENVIRONMENTS
Since LLMs can generate high-quality code (Chen et al., 2021), a very practical mission for LLM agents is to assist human interaction with computer interfaces. Here, we introduce three environments depending on coding and reasoning abilities as representatives in AGENTBENCH. | 2308.03688#17 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | [
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2203.15556"
},
{
"id": "2303.17491"
},
{
"id": "2211.05100"
},
{
"id": "2105.13231"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2304.11477"
},
{
"id": "2108.07732"
},
{
"id": "2211.09110"
},
{
"id": "2307.09288"
},
{
"id": "2302.01560"
},
{
"id": "2110.14168"
},
{
"id": "2308.12950"
},
{
"id": "2306.14898"
},
{
"id": "2210.02414"
},
{
"id": "2204.01691"
},
{
"id": "2303.11366"
},
{
"id": "2305.14314"
},
{
"id": "2105.09938"
}
] |