Dataset columns (name, dtype, value or length range):
doi                string   length 10–10
chunk-id           int64    values 0–936
chunk              string   length 401–2.02k
id                 string   length 12–14
title              string   length 8–162
summary            string   length 228–1.92k
source             string   length 31–31
authors            string   length 7–6.97k
categories         string   length 5–107
comment            string   length 4–398
journal_ref        string   length 8–194
primary_category   string   length 5–17
published          string   length 8–8
updated            string   length 8–8
references         list
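The table above is a flat chunk-level view of one paper, keyed by (doi, chunk-id). A minimal sketch of loading and inspecting such a dump, assuming it is distributed as a Parquet file; the file name "chunks.parquet" is a placeholder, not taken from this preview:

import pandas as pd

# Load the chunk table; the file name is an assumed placeholder.
df = pd.read_parquet("chunks.parquet")

# Each row is one text chunk of a paper, keyed by (doi, chunk-id).
row = df[(df["doi"] == "2307.03109") & (df["chunk-id"] == 139)].iloc[0]

print(row["id"])           # "2307.03109#139"
print(row["title"])        # paper title, repeated on every chunk row
print(row["chunk"][:200])  # start of the chunk text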
2307.03109
139
[29] Sunhao Dai, Ninglu Shao, Haiyuan Zhao, Weijie Yu, Zihua Si, Chen Xu, Zhongxiang Sun, Xiao Zhang, and Jun Xu. 2023. Uncovering ChatGPT’s Capabilities in Recommender Systems. arXiv preprint arXiv:2305.02182 (2023). [30] Wei Dai, Jionghao Lin, Flora Jin, Tongguang Li, Yi-Shan Tsai, Dragan Gasevic, and Guanliang Chen. 2023. Can large language models provide feedback to students? A case study on ChatGPT. (2023). [31] Xuan-Quy Dao and Ngoc-Bich Le. 2023. Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination. arXiv preprint arXiv:2306.06331 (2023). [32] Joost CF de Winter. 2023. Can ChatGPT pass high school exams on English language comprehension? ResearchGate. Preprint (2023).
2307.03109#139
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, education, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing the performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLM evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLM evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
140
[32] Joost CF de Winter. 2023. Can ChatGPT pass high school exams on English language comprehension? ResearchGate. Preprint (2023). [33] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition. IEEE, 248–255. [34] Aniket Deroy, Kripabandhu Ghosh, and Saptarshi Ghosh. 2023. How Ready are Pre-trained Abstractive Models and LLMs for Legal Case Judgement Summarization? arXiv preprint arXiv:2306.01248 (2023). [35] Ameet Deshpande, Vishvak Murahari, Tanmay Rajpurohit, Ashwin Kalyan, and Karthik Narasimhan. 2023. Toxicity in ChatGPT: Analyzing persona-assigned language models. arXiv preprint arXiv:2304.05335 (2023). [36] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).
2307.03109#140
2307.03109
141
[37] Jwala Dhamala, Tony Sun, Varun Kumar, Satyapriya Krishna, Yada Pruksachatkun, Kai-Wei Chang, and Rahul Gupta. 2021. BOLD: Dataset and metrics for measuring biases in open-ended language generation. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency. 862–872. [38] Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. 2023. AlpacaFarm: A simulation framework for methods that learn from human feedback. arXiv preprint arXiv:2305.14387 (2023). [39] Dat Duong and Benjamin D Solomon. 2023. Analysis of large-language model versus human performance for genetics questions. European Journal of Human Genetics (2023), 1–3. [40] Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Jiliang Tang, and Qing Li. 2023. Recommender Systems in the Era of Large Language Models (LLMs). arXiv:2307.02046 [cs.IR]
2307.03109#141
2307.03109
142
[41] Arsene Fansi Tchango, Rishab Goel, Zhi Wen, Julien Martel, and Joumana Ghosn. 2022. DDXPlus: A new dataset for automatic medical diagnosis. Advances in Neural Information Processing Systems 35 (2022), 31306–31318. [42] Emilio Ferrara. 2023. Should ChatGPT be biased? Challenges and risks of bias in large language models. arXiv preprint arXiv:2304.03738 (2023). [43] Luciano Floridi and Massimo Chiriatti. 2020. GPT-3: Its nature, scope, limits, and consequences. Minds and Machines 30 (2020), 681–694. [44] Michael C Frank. 2023. Baby steps in evaluating the capacities of large language models. Nature Reviews Psychology (2023), 1–2. [45] Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Christian Petersen, Alexis Chevalier, and Julius Berner. 2023. Mathematical capabilities of ChatGPT. arXiv preprint arXiv:2301.13867 (2023).
2307.03109#142
2307.03109
143
[46] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, et al. 2023. MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models. arXiv preprint arXiv:2306.13394 (2023). [47] Yao Fu, Litu Ou, Mingyu Chen, Yuhao Wan, Hao Peng, and Tushar Khot. 2023. Chain-of-Thought Hub: A Continuous Effort to Measure Large Language Models’ Reasoning Performance. arXiv preprint arXiv:2305.17306 (2023). [48] Tadayoshi Fushiki. 2011. Estimation of prediction error by using K-fold cross-validation. Statistics and Computing 21 (2011), 137–146. [49] Stephen I Gallant et al. 1990. Perceptron-based learning algorithms. IEEE Transactions on neural networks 1, 2 (1990), 179–191.
2307.03109#143
2307.03109
144
[49] Stephen I Gallant et al. 1990. Perceptron-based learning algorithms. IEEE Transactions on neural networks 1, 2 (1990), 179–191. [50] Irena Gao, Gabriel Ilharco, Scott Lundberg, and Marco Tulio Ribeiro. 2022. Adaptive Testing of Computer Vision Models. arXiv preprint arXiv:2212.02774 (2022). [51] Jianfeng Gao and Chin-Yew Lin. 2004. Introduction to the special issue on statistical language modeling. 87–93 pages. [52] Tianyu Gao, Adam Fisch, and Danqi Chen. 2020. Making pre-trained language models better few-shot learners. arXiv preprint arXiv:2012.15723 (2020). [53] Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A Smith. 2020. RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2020. 3356–3369. [54] Yonatan Geifman and Ran El-Yaniv. 2017. Selective classification for deep neural networks. Advances in neural information processing systems 30 (2017).
2307.03109#144
2307.03109
145
[54] Yonatan Geifman and Ran El-Yaniv. 2017. Selective classification for deep neural networks. Advances in neural information processing systems 30 (2017). [55] Zorik Gekhman, Jonathan Herzig, Roee Aharoni, Chen Elkind, and Idan Szpektor. 2023. Trueteacher: Learning factual consistency evaluation with large language models. arXiv preprint arXiv:2305.11171 (2023). [56] Gaël Gendron, Qiming Bao, Michael Witbrock, and Gillian Dobbie. 2023. Large Language Models Are Not Abstract Reasoners. arXiv preprint arXiv:2305.19555 (2023).
2307.03109#145
2307.03109
146
[57] Aidan Gilson, Conrad W Safranek, Thomas Huang, Vimig Socrates, Ling Chi, Richard Andrew Taylor, David Chartash, et al. 2023. How does ChatGPT perform on the United States Medical Licensing Examination? The implications of large language models for medical education and knowledge assessment. JMIR Medical Education 9, 1 (2023), e45312. [58] Jesse Graham, Jonathan Haidt, Sena Koleva, Matt Motyl, Ravi Iyer, Sean P Wojcik, and Peter H Ditto. 2013. Moral foundations theory: The pragmatic validity of moral pluralism. In Advances in experimental social psychology. Vol. 47. Elsevier, 55–130. [59] Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, et al. 2023. Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation. arXiv preprint arXiv:2306.05783 (2023). [60] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. 2017. On calibration of modern neural networks. In International conference on machine learning. PMLR, 1321–1330.
2307.03109#146
2307.03109
147
[61] Taicheng Guo, Kehan Guo, Zhengwen Liang, Zhichun Guo, Nitesh V Chawla, Olaf Wiest, Xiangliang Zhang, et al. 2023. What indeed can GPT models do in chemistry? A comprehensive benchmark on eight tasks. arXiv preprint arXiv:2305.18365 (2023).
[62] Thilo Hagendorff and Sarah Fabi. 2023. Human-Like Intuitive Behavior and Reasoning Biases Emerged in Language Models – and Disappeared in GPT-4. arXiv preprint arXiv:2306.07622 (2023).
[63] Alaleh Hamidi and Kirk Roberts. 2023. Evaluation of AI Chatbots for Patient-Specific EHR Questions. arXiv preprint arXiv:2306.02549 (2023).
[64] Moritz Hardt, Eric Price, and Nati Srebro. 2016. Equality of opportunity in supervised learning. Advances in Neural Information Processing Systems 29 (2016).
[65] Jochen Hartmann, Jasper Schwenzow, and Maximilian Witte. 2023. The political ideology of conversational AI: Converging evidence on ChatGPT’s pro-environmental, left-libertarian orientation. arXiv preprint arXiv:2301.01768 (2023).
2307.03109#147
2307.03109
148
[66] Qianyu He, Jie Zeng, Wenhao Huang, Lina Chen, Jin Xiao, Qianxi He, Xunzhe Zhou, Lida Chen, Xintao Wang, Yuncheng Huang, et al. 2023. Can Large Language Models Understand Real-World Complex Instructions? arXiv preprint arXiv:2309.09150 (2023).
[67] Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, and Juha Sorva. 2023. Exploring the Responses of Large Language Models to Beginner Programmers’ Help Requests. arXiv preprint arXiv:2306.05715 (2023).
[68] Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et al. 2021. Measuring coding challenge competence with APPS. arXiv preprint arXiv:2105.09938 (2021).
2307.03109#148
2307.03109
149
[69] Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Steinhardt. 2020. Aligning AI with shared human values. arXiv preprint arXiv:2008.02275 (2020).
[70] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300 (2020).
[71] Dan Hendrycks, Collin Burns, Anya Chen, and Spencer Ball. 2021. CUAD: An expert-annotated NLP dataset for legal contract review. arXiv preprint arXiv:2103.06268 (2021).
[72] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the MATH dataset. arXiv preprint arXiv:2103.03874 (2021).
2307.03109#149
2307.03109
150
[73] Jason Holmes, Zhengliang Liu, Lian Zhang, Yuzhen Ding, Terence T Sio, Lisa A McGee, Jonathan B Ashman, Xiang Li, Tianming Liu, Jiajian Shen, et al. 2023. Evaluating large language models on a highly-specialized topic, radiation oncology physics. arXiv preprint arXiv:2304.01938 (2023).
[74] Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, and Yossi Matias. 2022. TRUE: Re-evaluating factual consistency evaluation. arXiv preprint arXiv:2204.04991 (2022).
[75] Zhaoyi Joey Hou, Li Zhang, and Chris Callison-Burch. 2023. Choice-75: A Dataset on Decision Branching in Script Learning. arXiv preprint arXiv:2309.11737 (2023).
2307.03109#150
2307.03109
151
[76] Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, and Michael R. Lyu. 2023. Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench. arXiv preprint arXiv:2308.03656 (2023).
[77] Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, et al. 2023. Language is not all you need: Aligning perception with language models. arXiv preprint arXiv:2302.14045 (2023).
[78] Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, et al. 2023. C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322 (2023).
2307.03109#151
2307.03109
152
[79] Yue Huang, Qihui Zhang, Philip S. Yu, and Lichao Sun. 2023. TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models. arXiv preprint arXiv:2306.11507 (2023).
[80] HuggingFace. 2023. Open-source Large Language Models Leaderboard. https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard.
[81] Israt Jahan, Md Tahmid Rahman Laskar, Chun Peng, and Jimmy Huang. 2023. Evaluation of ChatGPT on Biomedical Tasks: A Zero-Shot Comparison with Fine-Tuned Generative Transformers. arXiv preprint arXiv:2306.04504 (2023).
[82] Neel Jain, Khalid Saifullah, Yuxin Wen, John Kirchenbauer, Manli Shu, Aniruddha Saha, Micah Goldblum, Jonas Geiping, and Tom Goldstein. 2023. Bring Your Own Data! Self-Supervised Evaluation for Large Language Models. arXiv preprint arXiv:2306.13651 (2023).
2307.03109#152
2307.03109
153
[83] Malin Jansson, Stefan Hrastinski, Stefan Stenbom, and Fredrik Enoksson. 2021. Online question and answer sessions: How students support their own and other students’ processes of inquiry in a text-based learning environment. The Internet and Higher Education 51 (2021), 100817.
[84] Sophie Jentzsch and Kristian Kersting. 2023. ChatGPT is fun, but it is not funny! Humor is still challenging Large Language Models. arXiv preprint arXiv:2306.04563 (2023).
[85] Jiaming Ji, Mickel Liu, Juntao Dai, Xuehai Pan, Chi Zhang, Ce Bian, Ruiyang Sun, Yizhou Wang, and Yaodong Yang. 2023. BeaverTails: Towards improved safety alignment of LLM via a human-preference dataset. arXiv preprint arXiv:2307.04657 (2023).
[86] Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye, Wayne Xin Zhao, and Ji-Rong Wen. 2023. StructGPT: A general framework for large language model to reason over structured data. arXiv preprint arXiv:2305.09645 (2023).
2307.03109#153
2307.03109
154
[87] Douglas Johnson, Rachel Goodman, J Patrinely, Cosby Stone, Eli Zimmerman, Rebecca Donald, Sam Chang, Sean Berkowitz, Avni Finn, Eiman Jahangir, et al. 2023. Assessing the accuracy and reliability of AI-generated medical responses: an evaluation of the Chat-GPT model. (2023).
[88] Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Vancouver, Canada.
2307.03109#154
[89] Saurav Kadavath, Tom Conerly, Amanda Askell, T. J. Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zachary Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston, Sheer El-Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, Deep Ganguli, Danny Hernandez, Josh Jacobson, John Kernion, Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson, Sam Ringer, Dario Amodei, Tom B. Brown, Jack Clark, Nicholas Joseph, Benjamin Mann, Sam McCandlish, Christopher Olah, and Jared Kaplan. 2022. Language Models (Mostly) Know What They Know. ArXiv abs/2207.05221 (2022).
2307.03109#155
[90] Ehud Karpas, Omri Abend, Yonatan Belinkov, Barak Lenz, Opher Lieber, Nir Ratner, Yoav Shoham, Hofit Bata, Yoav Levine, Kevin Leyton-Brown, et al. 2022. MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning. arXiv preprint arXiv:2205.00445 (2022). [91] Enkelejda Kasneci, Kathrin Seßler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, Georg Groh, Stephan Günnemann, Eyke Hüllermeier, et al. 2023. ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences 103 (2023), 102274. [92] Jean Khalfa. 1994. What is intelligence? (1994). [93] Yousuf A Khan, Clarisse Hokia, Jennifer Xu, and Ben Ehlert. 2023. covLLM: Large Language Models for COVID-19
2307.03109#156
Biomedical Literature. arXiv preprint arXiv:2306.04926 (2023). [94] Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, et al. 2021. Dynabench: Rethinking benchmarking in NLP. arXiv preprint arXiv:2104.14337 (2021). [95] Ron Kohavi et al. 1995. A study of cross-validation and bootstrap for accuracy estimation and model selection. In Ijcai, Vol. 14. Montreal, Canada, 1137–1145. [96] Stefan Kombrink, Tomas Mikolov, Martin Karafiát, and Lukáš Burget. 2011. Recurrent Neural Network Based Language Modeling in Meeting Recognition. In Interspeech, Vol. 11. 2877–2880.
2307.03109#157
[97] Tiffany H Kung, Morgan Cheatham, Arielle Medenilla, Czarina Sillos, Lorie De Leon, Camille Elepaño, Maria Madriaga, Rimel Aggabao, Giezel Diaz-Candido, James Maningo, et al. 2023. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLoS digital health 2, 2 (2023), e0000198. [98] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural Questions: a Benchmark for Question Answering Research. Transactions of the Association of Computational Linguistics (2019). [99] Adi Lahat, Eyal Shachar, Benjamin Avidan, Zina Shatz, Benjamin S Glicksberg, and Eyal Klang. 2023. Evaluating the use of large language model in identifying top research questions in gastroenterology. Scientific reports 13, 1 (2023), 4164.
2307.03109#158
[100] Viet Dac Lai, Nghia Trung Ngo, Amir Pouran Ben Veyseh, Hieu Man, Franck Dernoncourt, Trung Bui, and Thien Huu Nguyen. 2023. ChatGPT Beyond English: Towards a Comprehensive Evaluation of Large Language Models in Multilingual Learning. arXiv preprint arXiv:2304.05613 (2023). [101] Pier Luca Lanzi and Daniele Loiacono. 2023. Chatgpt and other large language models as evolutionary engines for online interactive collaborative game design. arXiv preprint arXiv:2303.02155 (2023). [102] Md Tahmid Rahman Laskar, M Saiful Bari, Mizanur Rahman, Md Amran Hossen Bhuiyan, Shafiq Joty, and Jimmy Xi- angji Huang. 2023. A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets. arXiv preprint arXiv:2305.18486 (2023). [103] Van-Hoang Le and Hongyu Zhang. 2023. An Evaluation of Log Parsing with ChatGPT. arXiv preprint arXiv:2306.01590 (2023).
2307.03109#159
[104] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. nature 521, 7553 (2015), 436–444. [105] Noah Lee, Na Min An, and James Thorne. 2023. Can Large Language Models Infer and Disagree Like Humans? arXiv preprint arXiv:2305.13788 (2023). [106] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461 (2019). [107] Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, and Ying Shan. 2023. Seed-bench: Benchmarking multimodal llms with generative comprehension. arXiv preprint arXiv:2307.16125 (2023).
2307.03109#160
[108] Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin. 2023. CMMLU: Measuring massive multitask language understanding in Chinese. arXiv preprint arXiv:2306.09212 (2023). [109] Minghao Li, Feifan Song, Bowen Yu, Haiyang Yu, Zhoujun Li, Fei Huang, and Yongbin Li. 2023. API-Bank: A Benchmark for Tool-Augmented LLMs. arXiv:2304.08244 [cs.CL] [110] Ruyu Li, Wenhao Deng, Yu Cheng, Zheng Yuan, Jiaqi Zhang, and Fajie Yuan. 2023. Exploring the Upper Limits of Text- Based Collaborative Filtering Using Large Language Models: Discoveries and Insights. arXiv preprint arXiv:2305.11700 (2023). [111] Xinzhe Li, Ming Liu, Shang Gao, and Wray Buntine. 2023. A Survey on Out-of-Distribution Evaluation of Neural NLP Models. arXiv:2306.15261 [cs.CL]
2307.03109#161
[112] Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. AlpacaEval: An Automatic Evaluator of Instruction-following Models. https://github.com/tatsu-lab/alpaca_eval. [113] Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. 2023. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355 (2023). [114] Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022).
2307.03109#162
[115] Tian Liang, Zhiwei He, Jen-tse Huang, Wenxuan Wang, Wenxiang Jiao, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi, and Xing Wang. 2023. Leveraging Word Guessing Games to Assess the Intelligence of Large Language Models. arXiv preprint arXiv:2310.20499 (2023). [116] Xun Liang, Shichao Song, Simin Niu, Zhiyu Li, Feiyu Xiong, Bo Tang, Zhaohui Wu, Dawei He, Peng Cheng, Zhonghao Wang, et al. 2023. UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation. arXiv preprint arXiv:2311.15296 (2023). [117] Valentin Liévin, Christoffer Egeberg Hother, and Ole Winther. 2022. Can large language models reason about medical questions? arXiv preprint arXiv:2207.08143 (2022).
2307.03109#163
2307.03109
164
[118] Chin-Yew Lin. 2004. ROUGE: A Package for Automatic Evaluation of Summaries. In Text Summarization Branches Out. Association for Computational Linguistics, Barcelona, Spain, 74–81. https://aclanthology.org/W04-1013 [119] Stephanie Lin, Jacob Hilton, and Owain Evans. 2021. Truthfulqa: Measuring how models mimic human falsehoods. arXiv preprint arXiv:2109.07958 (2021). [120] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13. Springer, 740–755. [121] Yen-Ting Lin and Yun-Nung Chen. 2023. LLM-Eval: Unified Multi-Dimensional Automatic Evaluation for Open- Domain Conversations with Large Language Models. arXiv preprint arXiv:2305.13711 (2023).
2307.03109#164
2307.03109
165
[122] Chuang Liu, Renren Jin, Yuqi Ren, Linhao Yu, Tianyu Dong, Xiaohan Peng, Shuting Zhang, Jianxiang Peng, Peiyi Zhang, Qingqing Lyu, Xiaowen Su, Qun Liu, and Deyi Xiong. 2023. M3KE: A Massive Multi-Level Multi-Subject Knowledge Evaluation Benchmark for Chinese Large Language Models. arXiv:2305.10263 [cs.CL] [123] Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. 2023. Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning. arXiv:2306.14565 [cs.CV] [124] Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, and Yue Zhang. 2023. Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4. arXiv:2304.03439 [cs.CL]
2307.03109#165
2307.03109
166
[125] Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. 2023. Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation. arXiv preprint arXiv:2305.01210 (2023). [126] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, and Dahua Lin. 2023. MMBench: Is Your Multi-modal Model an All-around Player? arXiv:2307.06281 [cs.CV] [127] Yiheng Liu, Tianle Han, Siyuan Ma, Jiayue Zhang, Yuanyuan Yang, Jiaming Tian, Hao He, Antong Li, Mengshen He, Zhengliang Liu, et al. 2023. Summary of chatgpt/gpt-4 research and perspective towards the future of large language models. arXiv preprint arXiv:2304.01852 (2023).
2307.03109#166
2307.03109
167
[128] LMSYS. 2023. Chatbot Arena: Benchmarking LLMs in the Wild with Elo Ratings. https://lmsys.org. [129] Alejandro Lopez-Lira and Yuehua Tang. 2023. Can chatgpt forecast stock price movements? Return predictability and large language models. arXiv preprint arXiv:2304.07619 (2023). [130] Chenyang Lyu, Jitao Xu, and Longyue Wang. 2023. New trends in machine translation using large language models: Case examples with chatgpt. arXiv preprint arXiv:2305.01181 (2023). [131] Qing Lyu, Josh Tan, Mike E Zapadka, Janardhana Ponnatapuram, Chuang Niu, Ge Wang, and Christopher T Whitlow. 2023. Translating radiology reports into plain language using chatgpt and gpt-4 with prompt learning: Promising results, limitations, and potential. arXiv preprint arXiv:2303.09038 (2023).
2307.03109#167
2307.03109
168
[132] Zhiyi Ma, Kawin Ethayarajh, Tristan Thrush, Somya Jain, Ledell Wu, Robin Jia, Christopher Potts, Adina Williams, and Douwe Kiela. 2021. Dynaboard: An evaluation-as-a-service platform for holistic next-generation benchmarking. Advances in Neural Information Processing Systems 34 (2021), 10351–10367. [133] Potsawee Manakul, Adian Liusie, and Mark JF Gales. 2023. Selfcheckgpt: Zero-resource black-box hallucination detection for generative large language models. arXiv preprint arXiv:2303.08896 (2023). [134] Potsawee Manakul, Adian Liusie, and Mark J. F. Gales. 2023. MQAG: Multiple-choice Question Answering and Generation for Assessing Information Consistency in Summarization. arXiv:2301.12307 [cs.CL] [135] Katerina Margatina, Shuai Wang, Yogarshi Vyas, Neha Anna John, Yassine Benajiba, and Miguel Ballesteros. 2023. Dynamic benchmarking of masked language models on temporal concept drift with multiple views. arXiv preprint arXiv:2302.12297 (2023).
2307.03109#168
2307.03109
169
[136] John McCarthy. 2007. What is artificial intelligence. (2007). [137] Microsoft. 2023. Bing Chat. https://www.bing.com/new (2023). [138] Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2023. FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation. arXiv preprint arXiv:2305.14251 (2023). [139] John J Nay, David Karamardian, Sarah B Lawsky, Wenting Tao, Meghana Bhat, Raghav Jain, Aaron Travis Lee, Jonathan H Choi, and Jungo Kasai. 2023. Large Language Models as Tax Attorneys: A Case Study in Legal Capabilities Emergence. arXiv preprint arXiv:2306.07075 (2023).
2307.03109#169
2307.03109
170
[140] Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2019. Adversarial NLI: A new benchmark for natural language understanding. arXiv preprint arXiv:1910.14599 (2019). [141] Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2022. Codegen: An open large language model for code with multi-turn program synthesis. arXiv preprint arXiv:2203.13474 (2022). [142] Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. arXiv preprint arXiv:1707.06875 (2017).
2307.03109#170
2307.03109
171
[143] Namkee Oh, Gyu-Seong Choi, and Woo Yong Lee. 2023. ChatGPT goes to the operating room: evaluating GPT-4 performance and its potential in surgical education and training in the era of large language models. Annals of Surgical Treatment and Research 104, 5 (2023), 269.
[144] Andrew M Olney. 2023. Generating multiple choice questions from a textbook: LLMs match human performance on most metrics. In AIED Workshops.
[145] OpenAI. 2023. ChatGPT. https://chat.openai.com/chat.
[146] OpenAI. 2023. GPT-4 Technical Report. arXiv:2303.08774 [cs.CL]
[147] Graziella Orrù, Andrea Piarulli, Ciro Conversano, and Angelo Gemignani. 2023. Human-like problem-solving abilities in large language models using ChatGPT. Frontiers in Artificial Intelligence 6 (2023).
2307.03109#171
2307.03109
172
[148] Simon Ott, Konstantin Hebenstreit, Valentin Liévin, Christoffer Egeberg Hother, Milad Moradi, Maximilian Mayrhauser, Robert Praas, Ole Winther, and Matthias Samwald. 2023. ThoughtSource: A central hub for large language model reasoning data. arXiv preprint arXiv:2301.11596 (2023).
[149] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35 (2022), 27730–27744.
[150] Vishal Pallagani, Bharath Muppasani, Keerthiram Murugesan, Francesca Rossi, Biplav Srivastava, Lior Horesh, Francesco Fabiano, and Andrea Loreggia. 2023. Understanding the Capabilities of Large Language Models for Automated Planning. arXiv preprint arXiv:2305.16151 (2023).
2307.03109#172
2307.03109
173
[151] Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, and Xindong Wu. 2023. Unifying Large Language Models and Knowledge Graphs: A Roadmap. arXiv:2306.08302 [cs.CL]
[152] Aaron Parisi, Yao Zhao, and Noah Fiedel. 2022. Talm: Tool augmented language models. arXiv preprint arXiv:2205.12255 (2022).
[153] Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel Bowman. 2022. BBQ: A hand-built bias benchmark for question answering. In Findings of the Association for Computational Linguistics: ACL 2022. 2086–2105.
[154] Alejandro Peña, Aythami Morales, Julian Fierrez, Ignacio Serna, Javier Ortega-Garcia, Iñigo Puente, Jorge Cordova, and Gonzalo Cordova. 2023. Leveraging Large Language Models for Topic Classification in the Domain of Public Affairs. arXiv preprint arXiv:2306.02864 (2023).
2307.03109#173
2307.03109
174
[155] Kaiping Peng, Richard E Nisbett, and Nancy YC Wong. 1997. Validity problems comparing values across cultures and possible solutions. Psychological Methods 2, 4 (1997), 329.
[156] Pouya Pezeshkpour. 2023. Measuring and Modifying Factual Knowledge in Large Language Models. arXiv preprint arXiv:2306.06264 (2023).
[157] Jason Phang, Angelica Chen, William Huang, and Samuel R Bowman. 2021. Adversarially constructed evaluation sets are more challenging, but may not be fair. arXiv preprint arXiv:2111.08181 (2021).
[158] Dongqi Pu and Vera Demberg. 2023. ChatGPT vs Human-authored Text: Insights into Controllable Text Summarization and Sentence Style Transfer. arXiv:2306.07799 [cs.CL]
[159] Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, and Diyi Yang. 2023. Is ChatGPT a general-purpose natural language processing task solver? arXiv preprint arXiv:2302.06476 (2023).
2307.03109#174
2307.03109
175
[160] Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, Yi Ren Fung, Yusheng Su, Huadong Wang, Cheng Qian, Runchu Tian, Kunlun Zhu, Shihao Liang, Xingyu Shen, Bokai Xu, Zhen Zhang, Yining Ye, Bowen Li, Ziwei Tang, Jing Yi, Yuzhang Zhu, Zhenning Dai, Lan Yan, Xin Cong, Yaxi Lu, Weilin Zhao, Yuxiang Huang, Junxi Yan, Xu Han, Xian Sun, Dahai Li, Jason Phang, Cheng Yang, Tongshuang Wu, Heng Ji, Zhiyuan Liu, and Maosong Sun. 2023. Tool Learning with Foundation Models. arXiv:2304.08354 [cs.CL]
[161] Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, and Maosong Sun. 2023. ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs. arXiv:2307.16789 [cs.AI]
2307.03109#175
2307.03109
176
[162] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training. (2018).
2307.03109#176
2307.03109
177
[163] Vipula Rawte, Amit Sheth, and Amitava Das. 2023. A Survey of Hallucination in Large Foundation Models. arXiv preprint arXiv:2309.05922 (2023).
[164] Marco Tulio Ribeiro and Scott Lundberg. 2022. Adaptive testing and debugging of NLP models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 3253–3267.
[165] Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of NLP models with CheckList. arXiv preprint arXiv:2005.04118 (2020).
[166] Nicholas Riccardi and Rutvik H Desai. 2023. The Two Word Test: A Semantic Benchmark for Large Language Models. arXiv preprint arXiv:2306.04610 (2023).
2307.03109#177
2307.03109
178
[167] Jérôme Rutinowski, Sven Franke, Jan Endendyk, Ina Dormuth, and Markus Pauly. 2023. The Self-Perception and Political Biases of ChatGPT. arXiv preprint arXiv:2304.07333 (2023).
[168] Mustafa Safdari, Greg Serapio-García, Clément Crepy, Stephen Fitz, Peter Romero, Luning Sun, Marwa Abdulhai, Aleksandra Faust, and Maja Matarić. 2023. Personality Traits in Large Language Models. arXiv preprint arXiv:2307.00184 (2023).
[169] Jamil S Samaan, Yee Hui Yeo, Nithya Rajeev, Lauren Hawley, Stuart Abel, Wee Han Ng, Nitin Srinivasan, Justin Park, Miguel Burch, Rabindra Watson, et al. 2023. Assessing the accuracy of responses by the language model ChatGPT to questions regarding bariatric surgery. Obesity Surgery (2023), 1–7.
2307.03109#178
2307.03109
179
[170] Abulhair Saparov, Richard Yuanzhe Pang, Vishakh Padmakumar, Nitish Joshi, Seyed Mehran Kazemi, Najoung Kim, and He He. 2023. Testing the General Deductive Reasoning Capacity of Large Language Models Using OOD Examples. arXiv preprint arXiv:2305.15269 (2023).
[171] Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, and Aran Komatsuzaki. 2023. ARB: Advanced Reasoning Benchmark for Large Language Models. arXiv preprint arXiv:2307.13692 (2023).
[172] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761 (2023).
[173] Prabin Sharma, Kisan Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, and Salik Ram Khanal. 2023. Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education. arXiv preprint arXiv:2307.00112 (2023).
[174] Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. HuggingGPT: Solving AI tasks with ChatGPT and its friends in Hugging Face. arXiv preprint arXiv:2303.17580 (2023).
[175] Emily Sheng, Kai-Wei Chang, Prem Natarajan, and Nanyun Peng. 2021. Societal Biases in Language Generation: Progress and Challenges. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 4275–4293.
[176] Gabriel Simmons. 2022. Moral mimicry: Large language models produce moral rationalizations tailored to political identity. arXiv preprint arXiv:2209.12106 (2022).
[177] Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, et al. 2022. Large Language Models Encode Clinical Knowledge. arXiv preprint arXiv:2212.13138 (2022).
[178] Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, et al. 2023. Large language models encode clinical knowledge. Nature 620, 7972 (2023), 172–180.
[179] Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, et al. 2022. Using DeepSpeed and Megatron to train Megatron-Turing NLG 530B, a large-scale generative language model. arXiv preprint arXiv:2201.11990 (2022).
[180] Xiaoyang Song, Akshat Gupta, Kiyan Mohebbizadeh, Shujie Hu, and Anant Singh. 2023. Have Large Language Models Developed a Personality?: Applicability of Self-Assessment Tests in Measuring Personality in LLMs. arXiv preprint arXiv:2305.14693 (2023).
[181] Giriprasad Sridhara, Sourav Mazumdar, et al. 2023. ChatGPT: A Study on its Utility for Ubiquitous Software Engineering Tasks. arXiv preprint arXiv:2305.16837 (2023).
[182] Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615 (2022).
[183] Weiwei Sun, Lingyong Yan, Xinyu Ma, Pengjie Ren, Dawei Yin, and Zhaochun Ren. 2023. Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agent. arXiv preprint arXiv:2304.09542 (2023).
[184] Zhengwei Tao, Zhi Jin, Xiaoying Bai, Haiyan Zhao, Yanlin Feng, Jia Li, and Wenpeng Hu. 2023. EvEval: A Comprehensive Evaluation of Event Semantics for Large Language Models. arXiv preprint arXiv:2305.15268 (2023).
[185] Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A heterogenous benchmark for zero-shot evaluation of information retrieval models. arXiv preprint arXiv:2104.08663 (2021).
[186] Arun James Thirunavukarasu, Refaat Hassan, Shathar Mahmood, Rohan Sanghera, Kara Barzangi, Mohanned El Mukashfi, and Sachin Shah. 2023. Trialling a large language model (ChatGPT) in general practice with the Applied Knowledge Test: observational study demonstrating opportunities and limitations in primary care. JMIR Medical Education 9, 1 (2023), e46599.
[187] Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. LaMDA: Language models for dialog applications. arXiv preprint arXiv:2201.08239 (2022).
[188] Tristan Thrush, Kushal Tirumala, Anmol Gupta, Max Bartolo, Pedro Rodriguez, Tariq Kane, William Gaviria Rojas, Peter Mattson, Adina Williams, and Douwe Kiela. 2022. Dynatask: A framework for creating dynamic AI benchmark tasks. arXiv preprint arXiv:2204.01906 (2022).
[189] Katherine Tian, Eric Mitchell, Allan Zhou, Archit Sharma, Rafael Rafailov, Huaxiu Yao, Chelsea Finn, and Christopher D Manning. 2023. Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback. arXiv preprint arXiv:2305.14975 (2023).
[190] Yuchi Tian, Kexin Pei, Suman Jana, and Baishakhi Ray. 2018. DeepTest: Automated testing of deep-neural-network-driven autonomous cars. In Proceedings of the 40th International Conference on Software Engineering. 303–314.
[191] ToolBench. 2023. Open-source tools learning benchmarks. https://github.com/sambanova/toolbench.
[192] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023).
[193] Alan M Turing. 2009. Computing machinery and intelligence. Springer.
[194] Karthik Valmeekam, Matthew Marquez, Sarath Sreedharan, and Subbarao Kambhampati. 2023. On the Planning Abilities of Large Language Models–A Critical Investigation. arXiv preprint arXiv:2305.15771 (2023).
Abilities of Large Language Models – A Critical Investigation. arXiv preprint arXiv:2305.15771 (2023).
[195] Karthik Valmeekam, Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati. 2022. Large Language Models Still Can't Plan (A Benchmark for LLMs on Planning and Reasoning about Change). arXiv preprint arXiv:2206.10498 (2022).
[196] Chris Van Der Lee, Albert Gatt, Emiel Van Miltenburg, Sander Wubben, and Emiel Krahmer. 2019. Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th International Conference on Natural Language Generation. 355–368.
[197] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems 30 (2017).
2307.03109#187
[198] Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, and Thang Luong. 2023. FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation. arXiv:2310.03214 [cs.CL]
[199] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. Advances in Neural Information Processing Systems 32 (2019).
2307.03109#188
[200] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461 (2018).
[201] Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, and Bo Li. 2023. DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models. arXiv:2306.11698 [cs.CL]
2307.03109#189
[202] Ben Wang and Aran Komatsuzaki. 2021. GPT-J-6B: A 6 billion parameter autoregressive language model.
[203] Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, and Bo Li. 2021. Adversarial GLUE: A multi-task benchmark for robustness evaluation of language models. arXiv preprint arXiv:2111.02840 (2021).
[204] Cunxiang Wang, Sirui Cheng, Zhikun Xu, Bowen Ding, Yidong Wang, and Yue Zhang. 2023. Evaluating open question answering evaluation. arXiv preprint arXiv:2305.12421 (2023).
[205] Hongru Wang, Rui Wang, Fei Mi, Zezhong Wang, Ruifeng Xu, and Kam-Fai Wong. 2023. Chain-of-thought prompting for responding to in-depth dialogue questions with LLM. arXiv:2305.11792 [cs.CL]
2307.03109#190
[206] Jindong Wang, Xixu Hu, Wenxin Hou, Hao Chen, Runkai Zheng, Yidong Wang, Linyi Yang, Haojun Huang, Wei Ye, Xiubo Geng, et al. 2023. On the robustness of ChatGPT: An adversarial and out-of-distribution perspective. In ICLR Workshop on Trustworthy and Reliable Large-Scale Machine Learning Models.
[207] Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, and Philip Yu. 2022. Generalizing to unseen domains: A survey on domain generalization. IEEE Transactions on Knowledge and Data Engineering (2022).
[208] Longyue Wang, Chenyang Lyu, Tianbo Ji, Zhirui Zhang, Dian Yu, Shuming Shi, and Zhaopeng Tu. 2023. Document-level machine translation with large language models. arXiv preprint arXiv:2304.02210 (2023).
2307.03109#191
[209] Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926 (2023).
[210] Rose E Wang and Dorottya Demszky. 2023. Is ChatGPT a Good Teacher Coach? Measuring Zero-Shot Performance For Scoring and Providing Actionable Insights on Classroom Instruction. arXiv preprint arXiv:2306.03090 (2023).
[211] Xidong Wang, Guiming Hardy Chen, Dingjie Song, Zhiyi Zhang, Zhihong Chen, Qingying Xiao, Feng Jiang, Jianquan Li, Xiang Wan, Benyou Wang, et al. 2023. CMB: A Comprehensive Medical Benchmark in Chinese. arXiv preprint arXiv:2308.08833 (2023).
[212] Xuena Wang, Xueting Li, Zi Yin, Yue Wu, and Liu Jia. 2023. Emotional Intelligence of Large Language Models. arXiv:2307.09042 [cs.AI]
2307.03109#192
[213] Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, and Heng Ji. 2023. MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback. arXiv preprint arXiv:2309.10691 (2023).
[214] Yue Wang, Weishi Wang, Shafiq Joty, and Steven CH Hoi. 2021. CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. arXiv preprint arXiv:2109.00859 (2021).
[215] Yidong Wang, Zhuohao Yu, Jindong Wang, Qiang Heng, Hao Chen, Wei Ye, Rui Xie, Xing Xie, and Shikun Zhang.
2307.03109#193
2023. Exploring Vision-Language Models for Imbalanced Learning. arXiv preprint arXiv:2304.01457 (2023).
[216] Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, et al. 2023. PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization. arXiv preprint arXiv:2306.05087 (2023).
[217] Zhuo Wang, Rongzhen Li, Bowen Dong, Jie Wang, Xiuxing Li, Ning Liu, Chenhui Mao, Wei Zhang, Liling Dong, Jing Gao, et al. 2023. Can LLMs like GPT-4 outperform traditional AI tools in dementia diagnosis? Maybe, but not today. arXiv preprint arXiv:2306.01499 (2023).
[218] Zengzhi Wang, Qiming Xie, Zixiang Ding, Yi Feng, and Rui Xia. 2023. Is ChatGPT a Good Sentiment Analyzer? A Preliminary Study. arXiv:2304.04339 [cs.CL]
2307.03109#194
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
[219] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682 (2022).
[220] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed Huai hsin Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022. Emergent Abilities of Large Language Models. Trans. Mach. Learn. Res. 2022 (2022).
[221] Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, and Bin Wang. 2023. CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? arXiv:2306.16636 [cs.CL]
2307.03109#195
[222] Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse Spencer-Smith, and Douglas C Schmidt. 2023. A prompt pattern catalog to enhance prompt engineering with ChatGPT. arXiv preprint arXiv:2302.11382 (2023).
[223] Tzu-Tsung Wong. 2015. Performance evaluation of classification algorithms by k-fold and leave-one-out cross validation. Pattern Recognition 48, 9 (2015), 2839–2846.
[224] Patrick Y Wu, Joshua A Tucker, Jonathan Nagler, and Solomon Messing. 2023. Large Language Models Can Be Used to Estimate the Ideologies of Politicians in a Zero-Shot Learning Setting. arXiv preprint arXiv:2303.12057 (2023).
[225] Yiran Wu, Feiran Jia, Shaokun Zhang, Qingyun Wu, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, and Chi Wang. 2023. An Empirical Study on Challenging Math Problem Solving with GPT-4. arXiv preprint arXiv:2306.01337 (2023).
2307.03109#196
[226] Yuhuai Wu, Albert Qiaochu Jiang, Wenda Li, Markus Rabe, Charles Staats, Mateja Jamnik, and Christian Szegedy. 2022. Autoformalization with large language models. Advances in Neural Information Processing Systems 35 (2022), 32353–32368.
[227] Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, and Yoon Kim. 2023. Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks. arXiv preprint arXiv:2307.02477 (2023).
[228] Qiming Xie, Zengzhi Wang, Yi Feng, and Rui Xia. 2023. Ask Again, Then Fail: Large Language Models’ Vacillations in Judgement. arXiv:2310.02174 [cs.CL]
[229] Fangzhi Xu, Qika Lin, Jiawei Han, Tianzhe Zhao, Jun Liu, and Erik Cambria. 2023. Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation From Deductive, Inductive and Abductive Views. arXiv preprint arXiv:2306.09841 (2023).
2307.03109#197
[230] Guohai Xu, Jiayi Liu, Ming Yan, Haotian Xu, Jinghui Si, Zhuoran Zhou, Peng Yi, Xing Gao, Jitao Sang, Rong Zhang, Ji Zhang, Chao Peng, Fei Huang, and Jingren Zhou. 2023. CValues: Measuring the Values of Chinese Large Language Models from Safety to Responsibility. arXiv:2307.09705 [cs.CL]
[231] Peng Xu, Wenqi Shao, Kaipeng Zhang, Peng Gao, Shuo Liu, Meng Lei, Fanqing Meng, Siyuan Huang, Yu Qiao, and Ping Luo. 2023. LVLM-eHub: A Comprehensive Evaluation Benchmark for Large Vision-Language Models. arXiv:2306.09265 [cs.CV]
[232] Ruiyun Xu, Yue Feng, and Hailiang Chen. 2023. ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience. arXiv preprint arXiv:2307.01135 (2023).
2307.03109#198
[233] Kai-Cheng Yang and Filippo Menczer. 2023. Large language models can rate news outlet credibility. arXiv preprint arXiv:2304.00228 (2023).
[234] Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang, Hanmeng Liu, Jindong Wang, Xing Xie, and Yue Zhang. 2022. GLUE-X: Evaluating natural language understanding models from an out-of-distribution generalization perspective. arXiv preprint arXiv:2211.08073 (2022).
[235] Zhenfei Yin, Jiong Wang, Jianjian Cao, Zhelun Shi, Dingning Liu, Mukai Li, Lu Sheng, Lei Bai, Xiaoshui Huang, Zhiyong Wang, et al. 2023. LAMM: Language-Assisted Multi-Modal Instruction-Tuning Dataset, Framework, and Benchmark. arXiv preprint arXiv:2306.06687 (2023).
2307.03109#199
[236] Jifan Yu, Xiaozhi Wang, Shangqing Tu, Shulin Cao, Daniel Zhang-Li, Xin Lv, Hao Peng, Zijun Yao, Xiaohan Zhang, Hanming Li, et al. 2023. KoLA: Carefully Benchmarking World Knowledge of Large Language Models. arXiv preprint arXiv:2306.09296 (2023).
[237] Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. 2023. MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models. arXiv preprint arXiv:2309.12284 (2023).
2307.03109#200
[238] Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. 2023. MM-Vet: Evaluating large multimodal models for integrated capabilities. arXiv preprint arXiv:2308.02490 (2023).
[239] Lifan Yuan, Yangyi Chen, Ganqu Cui, Hongcheng Gao, Fangyuan Zou, Xingyi Cheng, Heng Ji, Zhiyuan Liu, and Maosong Sun. 2023. Revisiting Out-of-distribution Robustness in NLP: Benchmark, Analysis, and LLMs Evaluations. arXiv:2306.04618 [cs.CL]
[240] Zheng Yuan, Fajie Yuan, Yu Song, Youhua Li, Junchen Fu, Fei Yang, Yunzhu Pan, and Yongxin Ni. 2023. Where to Go Next for Recommender Systems? ID- vs. Modality-based Recommender Models Revisited. arXiv:2303.13835 [cs.IR]
[241] Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, and Songfang Huang. 2023. How well do Large Language
2307.03109#201
Models perform in Arithmetic tasks? arXiv preprint arXiv:2304.02015 (2023).
[242] Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. 2013. Learning fair representations. In International conference on machine learning. PMLR, 325–333.
[243] Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. 2022. GLM-130B: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414 (2022).
[244] Beichen Zhang, Kun Zhou, Xilin Wei, Wayne Xin Zhao, Jing Sha, Shijin Wang, and Ji-Rong Wen. 2023. Evaluating and Improving Tool-Augmented Computation-Intensive Math Reasoning. arXiv preprint arXiv:2306.02408 (2023).
[245] Beichen Zhang, Kun Zhou, Xilin Wei, Wayne Xin Zhao, Jing Sha, Shijin Wang, and Ji-Rong Wen. 2023. Evaluating
2307.03109#202
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
[245] Beichen Zhang, Kun Zhou, Xilin Wei, Wayne Xin Zhao, Jing Sha, Shijin Wang, and Ji-Rong Wen. 2023. Evaluating and Improving Tool-Augmented Computation-Intensive Math Reasoning. arXiv preprint arXiv:2306.02408 (2023).
[246] Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. 2023. Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation. arXiv preprint arXiv:2305.07609 (2023).
[247] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. OPT: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068 (2022).
[248] Sarah J Zhang, Samuel Florin, Ariel N Lee, Eamon Niknafs, Andrei Marginean, Annie Wang, Keith Tyser, Zad Chin, Yann Hicke, Nikhil Singh, et al. 2023. Exploring the MIT Mathematics and EECS Curriculum Using Large Language Models. arXiv preprint arXiv:2306.08997 (2023).
[249] Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675 (2019).
[250] Wenxuan Zhang, Sharifah Mahani Aljunied, Chang Gao, Yew Ken Chia, and Lidong Bing. 2023. M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models. arXiv preprint arXiv:2306.05179 (2023).
[251] Wenxuan Zhang, Yue Deng, Bing Liu, Sinno Jialin Pan, and Lidong Bing. 2023. Sentiment Analysis in the Era of Large Language Models: A Reality Check. arXiv preprint arXiv:2305.15005 (2023).
[252] Xinghua Zhang, Bowen Yu, Haiyang Yu, Yangyu Lv, Tingwen Liu, Fei Huang, Hongbo Xu, and Yongbin Li. 2023. Wider and deeper LLM networks are fairer LLM evaluators. arXiv preprint arXiv:2308.01862 (2023).
[253] Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, and Shuming Shi. 2023. Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models. arXiv preprint arXiv:2309.01219 (2023).
[254] Zhexin Zhang, Leqi Lei, Lindong Wu, Rui Sun, Yongkang Huang, Chong Long, Xiao Liu, Xuanyu Lei, Jie Tang, and Minlie Huang. 2023. SafetyBench: Evaluating the Safety of Large Language Models with Multiple Choice Questions. arXiv preprint arXiv:2309.07045 (2023).
[255] Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, and Baobao Chang. 2023. MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning. arXiv preprint arXiv:2309.07915 (2023).
[256] Jiaxu Zhao, Meng Fang, Zijing Shi, Yitong Li, Ling Chen, and Mykola Pechenizkiy. 2023. CHBias: Bias Evaluation and Mitigation of Chinese Conversational Language Models. arXiv preprint arXiv:2305.11262 (2023).
[257] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223 (2023).
[258] Yunqing Zhao, Tianyu Pang, Chao Du, Xiao Yang, Chongxuan Li, Ngai-Man Cheung, and Min Lin. 2023. On Evaluating Adversarial Robustness of Large Vision-Language Models. arXiv preprint arXiv:2305.16934 (2023).
[259] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Tianle Li, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zhuohan Li, Zi Lin, Eric Xing, et al. 2023. LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset. arXiv preprint arXiv:2309.11998 (2023).
[260] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. arXiv preprint arXiv:2306.05685 (2023).
[261] Ming Zhong, Yang Liu, Da Yin, Yuning Mao, Yizhu Jiao, Pengfei Liu, Chenguang Zhu, Heng Ji, and Jiawei Han. 2022. Towards a unified multi-dimensional evaluator for text generation. arXiv preprint arXiv:2210.07197 (2022).
[262] Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2023. AGIEval: A human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364 (2023).
[263] Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. 2022. Large language models are human-level prompt engineers. arXiv preprint arXiv:2211.01910 (2022).
[264] Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei Ye, Neil Zhenqiang Gong, Yue Zhang, et al. 2023. PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts. arXiv preprint arXiv:2306.04528 (2023).
[265] Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, et al. 2023. Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective. arXiv preprint arXiv:2306.10512 (2023).
[266] Terry Yue Zhuo, Yujin Huang, Chunyang Chen, and Zhenchang Xing. 2023. Exploring AI ethics of ChatGPT: A diagnostic analysis. arXiv preprint arXiv:2301.12867 (2023).
[267] Terry Yue Zhuo, Zhuang Li, Yujin Huang, Yuan-Fang Li, Weiqing Wang, Gholamreza Haffari, and Fatemeh Shiri. 2023. On Robustness of Prompt-based Semantic Parsing with Large Pre-trained Language Model: An Empirical Study on Codex. arXiv preprint arXiv:2301.12868 (2023).
[268] Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 2019. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593 (2019).
[269] Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, and Diyi Yang. 2023. Can Large Language Models Transform Computational Social Science? arXiv preprint arXiv:2305.03514 (2023).
2307.02046
1
Abstract—With the prosperity of e-commerce and web applications, Recommender Systems (RecSys) have become an important component in our daily life, providing personalized suggestions that cater to user preferences. While Deep Neural Networks (DNNs) have made significant advancements in enhancing recommender systems by modeling user-item interactions and incorporating their textual side information, these DNN-based methods still have some limitations, such as difficulties in effectively understanding users’ interests and capturing textual side information, inabilities in generalizing to various seen/unseen recommendation scenarios and reasoning on their predictions, etc. Meanwhile, the emergence of Large Language Models (LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI), due to their remarkable abilities in fundamental responsibilities of language understanding and generation, as well as impressive generalization and reasoning capabilities. As a result, recent studies have attempted to harness the power of LLMs to enhance recommender systems. Given the rapid evolution of this research direction in recommender systems, there is a pressing need for a systematic overview that summarizes existing
2307.02046#1
Recommender Systems in the Era of Large Language Models (LLMs)
With the prosperity of e-commerce and web applications, Recommender Systems (RecSys) have become an important component of our daily life, providing personalized suggestions that cater to user preferences. While Deep Neural Networks (DNNs) have made significant advancements in enhancing recommender systems by modeling user-item interactions and incorporating textual side information, DNN-based methods still face limitations, such as difficulties in understanding users' interests and capturing textual side information, inabilities in generalizing to various recommendation scenarios and reasoning on their predictions, etc. Meanwhile, the emergence of Large Language Models (LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI), due to their remarkable abilities in fundamental responsibilities of language understanding and generation, as well as impressive generalization and reasoning capabilities. As a result, recent studies have attempted to harness the power of LLMs to enhance recommender systems. Given the rapid evolution of this research direction in recommender systems, there is a pressing need for a systematic overview that summarizes existing LLM-empowered recommender systems, to provide researchers in relevant fields with an in-depth understanding. Therefore, in this paper, we conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting. More specifically, we first introduce representative methods to harness the power of LLMs (as a feature encoder) for learning representations of users and items. Then, we review recent techniques of LLMs for enhancing recommender systems from three paradigms, namely pre-training, fine-tuning, and prompting. Finally, we comprehensively discuss future directions in this emerging field.
http://arxiv.org/pdf/2307.02046
Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li
cs.IR, cs.AI, cs.CL
16 pages, 5 figures
null
cs.IR
20230705
20230805
[ { "id": "2201.11903" }, { "id": "2305.05973" }, { "id": "2010.15980" }, { "id": "2307.09688" }, { "id": "2307.07171" }, { "id": "2305.15498" }, { "id": "2305.02182" }, { "id": "2305.12090" }, { "id": "2305.07609" }, { "id": "2304.03516" }, { "id": "2303.14524" }, { "id": "2305.15673" }, { "id": "2301.00234" }, { "id": "2305.13112" }, { "id": "2307.10747" }, { "id": "2302.02591" }, { "id": "2305.15062" }, { "id": "2307.15780" }, { "id": "2303.13835" }, { "id": "2307.05722" }, { "id": "2305.07001" }, { "id": "2303.17564" }, { "id": "2305.11700" }, { "id": "2304.03879" }, { "id": "2206.08082" }, { "id": "2305.05065" }, { "id": "2305.00447" }, { "id": "2302.05729" }, { "id": "2304.10149" }, { "id": "2304.01097" }, { "id": "2306.05817" }, { "id": "2304.03153" }, { "id": "2304.04218" }, { "id": "2301.11489" }, { "id": "2305.06569" }, { "id": "2206.06190" }, { "id": "2307.02157" }, { "id": "2305.19860" }, { "id": "2305.15756" }, { "id": "2305.07633" }, { "id": "2305.16582" }, { "id": "2305.08845" }, { "id": "2307.03393" }, { "id": "2304.11116" }, { "id": "2306.06031" }, { "id": "2303.18223" }, { "id": "2305.15036" }, { "id": "2305.17812" }, { "id": "2010.01494" }, { "id": "2205.09666" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2106.00573" }, { "id": "2305.11255" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.06566" }, { "id": "2306.17256" }, { "id": "2305.06212" }, { "id": "2306.02552" }, { "id": "2305.07961" }, { "id": "2203.11171" }, { "id": "2301.12867" }, { "id": "2305.04518" }, { "id": "2305.14552" }, { "id": "2112.08633" }, { "id": "2307.14225" }, { "id": "1511.06939" }, { "id": "2012.15723" }, { "id": "2303.08896" }, { "id": "2306.06615" }, { "id": "2305.15075" }, { "id": "2305.09858" }, { "id": "2209.10117" }, { "id": "2305.06474" }, { "id": "2201.08239" }, { "id": "2302.03735" }, { "id": "2109.01652" }, { "id": "2305.07622" }, { "id": "2306.10933" } ]
2307.02477
1
Language Models Through Counterfactual Tasks. Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Jacob Andreas, Yoon Kim (MIT); Najoung Kim (Boston University). [email protected] [Figure 1 residue, left half: bar charts (scale 0 to 100) of GPT-4 performance on default vs. counterfactual variants of Arithmetic (base-10 vs. base-9), Code Execution and Code Generation (Python with default vs. 1-based indexing), Basic Syntax (subject-verb-object vs. verb-object-subject order), and Logic ("If X are Y, Y are Z. Are X Z?"); only the panel titles, example prompts, and axis range are recoverable.]
2307.02477#1
Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks
The impressive performance of recent language models across a wide range of tasks suggests that they possess a degree of abstract reasoning skills. Are these skills general and transferable, or specialized to specific tasks seen during pretraining? To disentangle these effects, we propose an evaluation framework based on "counterfactual" task variants that deviate from the default assumptions underlying standard tasks. Across a suite of 11 tasks, we observe nontrivial performance on the counterfactual variants, but nevertheless find that performance substantially and consistently degrades compared to the default conditions. This suggests that while current LMs may possess abstract task-solving skills to a degree, they often also rely on narrow, non-transferable procedures for task-solving. These results motivate a more careful interpretation of language model performance that teases apart these aspects of behavior.
http://arxiv.org/pdf/2307.02477
Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim
cs.CL, cs.AI
null
null
cs.CL
20230705
20230801
[]
2307.02485
1
1University of Massachusetts Amherst, 2 Tsinghua University, 3Shanghai Jiao Tong University, 4MIT, 5MIT-IBM Watson AI Lab # Abstract Large Language Models (LLMs) have demonstrated impressive planning abilities in single-agent embodied tasks across various domains. However, their capacity for planning and communication in multi-agent cooperation remains unclear, even though these are crucial skills for intelligent embodied agents. In this paper, we present a novel framework that utilizes LLMs for multi-agent cooperation and tests it in various embodied environments. Our framework enables embodied agents to plan, communicate, and cooperate with other embodied agents or humans to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs, such as GPT-4, can surpass strong planning-based methods and exhibit emergent effective communication using our framework without requiring fine-tuning or few-shot prompting. We also discover that LLM-based agents that communicate in natural language can earn more trust and cooperate more effectively with humans. Our research underscores the potential of LLMs for embodied AI and lays the foundation for future research in multi-agent cooperation. Videos can be found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. # Introduction
2307.02485#1
Building Cooperative Embodied Agents Modularly with Large Language Models
Large Language Models (LLMs) have demonstrated impressive planning abilities in single-agent embodied tasks across various domains. However, their capacity for planning and communication in multi-agent cooperation remains unclear, even though these are crucial skills for intelligent embodied agents. In this paper, we present a novel framework that utilizes LLMs for multi-agent cooperation and tests it in various embodied environments. Our framework enables embodied agents to plan, communicate, and cooperate with other embodied agents or humans to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs, such as GPT-4, can surpass strong planning-based methods and exhibit emergent effective communication using our framework without requiring fine-tuning or few-shot prompting. We also discover that LLM-based agents that communicate in natural language can earn more trust and cooperate more effectively with humans. Our research underscores the potential of LLMs for embodied AI and lays the foundation for future research in multi-agent cooperation. Videos can be found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/.
http://arxiv.org/pdf/2307.02485
Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan
cs.AI, cs.CL, cs.CV
Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/
null
cs.AI
20230705
20230705
[ { "id": "2211.09935" }, { "id": "1712.05474" }, { "id": "2007.04954" }, { "id": "2210.04964" }, { "id": "1909.07528" }, { "id": "1903.00784" }, { "id": "1711.11017" }, { "id": "2201.11903" }, { "id": "2305.02412" }, { "id": "2212.08681" }, { "id": "2110.01517" }, { "id": "1809.00786" }, { "id": "1809.07124" }, { "id": "2303.03378" }, { "id": "2210.06849" }, { "id": "2305.05252" }, { "id": "2302.14045" }, { "id": "1810.00147" }, { "id": "2011.01975" }, { "id": "2209.07753" }, { "id": "2303.04129" }, { "id": "2301.05223" }, { "id": "2205.11916" }, { "id": "2206.08916" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2212.04088" } ]
2307.02486
1
# Abstract Scaling sequence length has become a critical demand in the era of large language models. However, existing methods struggle with either computational complexity or model expressivity, rendering the maximum sequence length restricted. To address this issue, we introduce LONGNET, a Transformer variant that can scale sequence length to more than 1 billion tokens, without sacrificing the performance on shorter sequences. Specifically, we propose dilated attention, which expands the attentive field exponentially as the distance grows. LONGNET has significant advantages: 1) it has a linear computation complexity and a logarithm dependency between any two tokens in a sequence; 2) it can be served as a distributed trainer for extremely long sequences; 3) its dilated attention is a drop-in replacement for standard attention, which can be seamlessly integrated with the existing Transformer-based optimization. Experiments results demonstrate that LONGNET yields strong performance on both long-sequence modeling and general language tasks. Our work opens up new possibilities for modeling very long sequences, e.g., treating a whole corpus or even the entire Internet as a sequence. Code is available at https://aka.ms/LongNet.
2307.02486#1
LongNet: Scaling Transformers to 1,000,000,000 Tokens
Scaling sequence length has become a critical demand in the era of large language models. However, existing methods struggle with either computational complexity or model expressivity, rendering the maximum sequence length restricted. To address this issue, we introduce LongNet, a Transformer variant that can scale sequence length to more than 1 billion tokens, without sacrificing the performance on shorter sequences. Specifically, we propose dilated attention, which expands the attentive field exponentially as the distance grows. LongNet has significant advantages: 1) it has a linear computation complexity and a logarithm dependency between any two tokens in a sequence; 2) it can be served as a distributed trainer for extremely long sequences; 3) its dilated attention is a drop-in replacement for standard attention, which can be seamlessly integrated with the existing Transformer-based optimization. Experiments results demonstrate that LongNet yields strong performance on both long-sequence modeling and general language tasks. Our work opens up new possibilities for modeling very long sequences, e.g., treating a whole corpus or even the entire Internet as a sequence.
http://arxiv.org/pdf/2307.02486
Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei
cs.CL, cs.LG
Work in progress
null
cs.CL
20230705
20230719
[]
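The LongNet chunk above (2307.02486#1) describes dilated attention as an attention pattern whose attentive field expands exponentially with distance. Below is a minimal index-selection sketch of that idea, not the paper's implementation: the segment lengths, dilation rates, causal mask, and the omission of the multi-pattern mixing step are all simplifying assumptions.

```python
def dilated_attention_indices(seq_len, segment_lengths=(4, 8, 16), dilations=(1, 2, 4)):
    """For each (segment length w, dilation r) pair, keys inside a segment are
    subsampled with stride r, so the set of attended positions grows only
    slowly with distance. Illustrative only; LongNet additionally restricts
    queries to the same sparse grid and mixes the per-pattern outputs."""
    attended = [set() for _ in range(seq_len)]
    for w, r in zip(segment_lengths, dilations):
        for start in range(0, seq_len, w):
            segment = list(range(start, min(start + w, seq_len)))
            sparse_keys = segment[::r]                     # keep every r-th position
            for q in segment:                              # queries in this segment
                attended[q].update(k for k in sparse_keys if k <= q)  # causal mask
    return attended

if __name__ == "__main__":
    idx = dilated_attention_indices(seq_len=32)
    print("positions attended by token 31:", sorted(idx[31]))
```

With segment length and dilation growing geometrically, the number of attended positions per token grows roughly logarithmically with sequence length, which is the property the chunk highlights.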
2307.03692
1
Parikshith Kulkarni Melisa Russak Writer, Inc. {waseem,...,melisa}@writer.com # Abstract In this paper, we introduce the Instruction Following Score (IFS), a metric that detects language models’ ability to follow instructions. The metric has a dual purpose. First, IFS can be used to distinguish between base and instruct models. We benchmark publicly available base and instruct models, and show that the ratio of well formatted responses to partial and full sentences can be an effective measure between those two model classes. Secondly, the metric can be used as an early stopping criteria for instruct tuning. We compute IFS for Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models learn to follow instructions relatively early in the training process, and the further finetuning can result in changes in the underlying base model semantics. As an example of semantics change we show the objectivity of model predictions, as defined by an auxiliary metric ObjecQA. We show that in this particular case, semantic changes are the steepest when the IFS tends to plateau. We hope that decomposing instruct tuning into IFS and semantic factors starts a new trend in better controllable instruct tuning and opens possibilities for designing minimal instruct interfaces querying foundation models. # Introduction which result in a plethora of possible combinations leading to distinct instruct models.
2307.03692#1
Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning
In this paper, we introduce the Instruction Following Score (IFS), a metric that detects language models' ability to follow instructions. The metric has a dual purpose. First, IFS can be used to distinguish between base and instruct models. We benchmark publicly available base and instruct models, and show that the ratio of well formatted responses to partial and full sentences can be an effective measure between those two model classes. Secondly, the metric can be used as an early stopping criteria for instruct tuning. We compute IFS for Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models learn to follow instructions relatively early in the training process, and the further finetuning can result in changes in the underlying base model semantics. As an example of semantics change we show the objectivity of model predictions, as defined by an auxiliary metric ObjecQA. We show that in this particular case, semantic changes are the steepest when the IFS tends to plateau. We hope that decomposing instruct tuning into IFS and semantic factors starts a new trend in better controllable instruct tuning and opens possibilities for designing minimal instruct interfaces querying foundation models.
http://arxiv.org/pdf/2307.03692
Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak
cs.CL, cs.AI
null
null
cs.CL
20230705
20230705
[ { "id": "2101.00027" } ]
2307.02046
2
enhance recommender systems. Given the rapid evolution of this research direction in recommender systems, there is a pressing need for a systematic overview that summarizes existing LLM-empowered recommender systems, so as to provide researchers and practitioners in relevant fields with an in-depth understanding. Therefore, in this survey, we conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting. More specifically, we first introduce representative methods to harness the power of LLMs (as a feature encoder) for learning representations of users and items. Then, we review recent advanced techniques of LLMs for enhancing recommender systems from three paradigms, namely pre-training, fine-tuning, and prompting. Finally, we comprehensively discuss the promising future directions in this emerging field.
2307.02046#2
Recommender Systems in the Era of Large Language Models (LLMs)
With the prosperity of e-commerce and web applications, Recommender Systems (RecSys) have become an important component of our daily life, providing personalized suggestions that cater to user preferences. While Deep Neural Networks (DNNs) have made significant advancements in enhancing recommender systems by modeling user-item interactions and incorporating textual side information, DNN-based methods still face limitations, such as difficulties in understanding users' interests and capturing textual side information, inabilities in generalizing to various recommendation scenarios and reasoning on their predictions, etc. Meanwhile, the emergence of Large Language Models (LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI), due to their remarkable abilities in fundamental responsibilities of language understanding and generation, as well as impressive generalization and reasoning capabilities. As a result, recent studies have attempted to harness the power of LLMs to enhance recommender systems. Given the rapid evolution of this research direction in recommender systems, there is a pressing need for a systematic overview that summarizes existing LLM-empowered recommender systems, to provide researchers in relevant fields with an in-depth understanding. Therefore, in this paper, we conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting. More specifically, we first introduce representative methods to harness the power of LLMs (as a feature encoder) for learning representations of users and items. Then, we review recent techniques of LLMs for enhancing recommender systems from three paradigms, namely pre-training, fine-tuning, and prompting. Finally, we comprehensively discuss future directions in this emerging field.
http://arxiv.org/pdf/2307.02046
Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li
cs.IR, cs.AI, cs.CL
16 pages, 5 figures
null
cs.IR
20230705
20230805
[ { "id": "2201.11903" }, { "id": "2305.05973" }, { "id": "2010.15980" }, { "id": "2307.09688" }, { "id": "2307.07171" }, { "id": "2305.15498" }, { "id": "2305.02182" }, { "id": "2305.12090" }, { "id": "2305.07609" }, { "id": "2304.03516" }, { "id": "2303.14524" }, { "id": "2305.15673" }, { "id": "2301.00234" }, { "id": "2305.13112" }, { "id": "2307.10747" }, { "id": "2302.02591" }, { "id": "2305.15062" }, { "id": "2307.15780" }, { "id": "2303.13835" }, { "id": "2307.05722" }, { "id": "2305.07001" }, { "id": "2303.17564" }, { "id": "2305.11700" }, { "id": "2304.03879" }, { "id": "2206.08082" }, { "id": "2305.05065" }, { "id": "2305.00447" }, { "id": "2302.05729" }, { "id": "2304.10149" }, { "id": "2304.01097" }, { "id": "2306.05817" }, { "id": "2304.03153" }, { "id": "2304.04218" }, { "id": "2301.11489" }, { "id": "2305.06569" }, { "id": "2206.06190" }, { "id": "2307.02157" }, { "id": "2305.19860" }, { "id": "2305.15756" }, { "id": "2305.07633" }, { "id": "2305.16582" }, { "id": "2305.08845" }, { "id": "2307.03393" }, { "id": "2304.11116" }, { "id": "2306.06031" }, { "id": "2303.18223" }, { "id": "2305.15036" }, { "id": "2305.17812" }, { "id": "2010.01494" }, { "id": "2205.09666" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2106.00573" }, { "id": "2305.11255" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.06566" }, { "id": "2306.17256" }, { "id": "2305.06212" }, { "id": "2306.02552" }, { "id": "2305.07961" }, { "id": "2203.11171" }, { "id": "2301.12867" }, { "id": "2305.04518" }, { "id": "2305.14552" }, { "id": "2112.08633" }, { "id": "2307.14225" }, { "id": "1511.06939" }, { "id": "2012.15723" }, { "id": "2303.08896" }, { "id": "2306.06615" }, { "id": "2305.15075" }, { "id": "2305.09858" }, { "id": "2209.10117" }, { "id": "2305.06474" }, { "id": "2201.08239" }, { "id": "2302.03735" }, { "id": "2109.01652" }, { "id": "2305.07622" }, { "id": "2306.10933" } ]
2307.02053
2
# Abstract Recently, the release of INSTRUCTEVAL [Chia et al., 2023] has provided valuable insights into the performance of large language models (LLMs) that utilize encoder-decoder or decoder-only architecture. Interestingly, despite being introduced four years ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general problem-solving skills. This performance discrepancy can be attributed to three key factors: (1) Pre-training data, (2) Backbone architecture, and (3) Instruction dataset. In this technical report, our main focus is on investigating the impact of the third factor by leveraging VICUNA, a large language model based on LLAMA, which has undergone fine-tuning on ChatGPT conversations. To achieve this objective, we fine-tuned VICUNA using a customized instruction dataset collection called FLAN-MINI. This collection includes a subset of the large-scale instruction dataset known as FLAN, as well as various code-related datasets and conversational datasets derived from ChatGPT/GPT-4. This dataset comprises a large number of tasks that
2307.02053#2
Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning
Recently, the release of INSTRUCTEVAL has provided valuable insights into the performance of large language models (LLMs) that utilize encoder-decoder or decoder-only architecture. Interestingly, despite being introduced four years ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general problem-solving skills. This performance discrepancy can be attributed to three key factors: (1) Pre-training data, (2) Backbone architecture, and (3) Instruction dataset. In this technical report, our main focus is on investigating the impact of the third factor by leveraging VICUNA, a large language model based on LLAMA, which has undergone fine-tuning on ChatGPT conversations. To achieve this objective, we fine-tuned VICUNA using a customized instruction dataset collection called FLANMINI. This collection includes a subset of the large-scale instruction dataset known as FLAN, as well as various code-related datasets and conversational datasets derived from ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand problem-solving skills. Our experimental findings strongly indicate that the enhanced problem-solving abilities of our model, FLACUNA, are obtained through fine-tuning VICUNA on the FLAN dataset, leading to significant improvements across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly available at https://huggingface.co/declare-lab/flacuna-13b-v1.0.
http://arxiv.org/pdf/2307.02053
Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria
cs.CL
null
null
cs.CL
20230705
20230705
[ { "id": "2301.13688" }, { "id": "2106.09685" }, { "id": "2203.07814" }, { "id": "1909.09436" } ]
2307.02477
2
[Figure 1 residue, right half: bar charts (scale 0 to 100) of GPT-4 performance on default vs. counterfactual variants of Spatial Drawing (e.g., drawing a bubble tea, upright vs. rotated 180°), Chord Fingering and Note in Melody (guitar chord and melody questions under a standard vs. an altered tuning or key), Chess ("Is the move legal?"), and the SET Game (the default rule vs. an additional modified rule for number cards); only panel titles and fragments of the example prompts and rules are recoverable.]
2307.02477#2
Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks
The impressive performance of recent language models across a wide range of tasks suggests that they possess a degree of abstract reasoning skills. Are these skills general and transferable, or specialized to specific tasks seen during pretraining? To disentangle these effects, we propose an evaluation framework based on "counterfactual" task variants that deviate from the default assumptions underlying standard tasks. Across a suite of 11 tasks, we observe nontrivial performance on the counterfactual variants, but nevertheless find that performance substantially and consistently degrades compared to the default conditions. This suggests that while current LMs may possess abstract task-solving skills to a degree, they often also rely on narrow, non-transferable procedures for task-solving. These results motivate a more careful interpretation of language model performance that teases apart these aspects of behavior.
http://arxiv.org/pdf/2307.02477
Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim
cs.CL, cs.AI
null
null
cs.CL
20230705
20230801
[]
2307.02485
2
# Introduction Large Language Models (LLMs) have exhibited remarkable capabilities across various domains, implying their mastery of natural language understanding, rich world knowledge, and complex reasoning capability [6]. Recent research has also demonstrated that LLMs can function as planners in single-agent embodied tasks through zero-shot prompting for instruction following tasks [15] or few-shot prompting for more complex long-horizon tasks [44]. However, for embodied agents to work with other agents or with humans, they also need to have strong abilities for cooperation and communication. To date, it still remains unclear whether LLMs have such abilities necessary for embodied multi-agent cooperation. Therefore, this paper aims to investigate whether LLMs can help build cooperative embodied agents that can collaborate with other agents and humans to accomplish complex tasks through collaborative planning and communication. To this end, we focus on an embodied multi-agent setting as shown in Figure 1, where two embodied agents have to cooperate to finish a task as soon as possible. To succeed in this setting, agents must i) extract useful information from observations, ii) revise their beliefs about the world and other agents, iii) decide what and when to communicate, and iv) plan collaboratively to reach the common goal. To achieve these goals, we introduce a novel framework that utilizes LLMs to plan and communicate with other agents to cooperatively solve complex embodied tasks without any fine-tuning or few-shot
2307.02485#2
Building Cooperative Embodied Agents Modularly with Large Language Models
Large Language Models (LLMs) have demonstrated impressive planning abilities in single-agent embodied tasks across various domains. However, their capacity for planning and communication in multi-agent cooperation remains unclear, even though these are crucial skills for intelligent embodied agents. In this paper, we present a novel framework that utilizes LLMs for multi-agent cooperation and tests it in various embodied environments. Our framework enables embodied agents to plan, communicate, and cooperate with other embodied agents or humans to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs, such as GPT-4, can surpass strong planning-based methods and exhibit emergent effective communication using our framework without requiring fine-tuning or few-shot prompting. We also discover that LLM-based agents that communicate in natural language can earn more trust and cooperate more effectively with humans. Our research underscores the potential of LLMs for embodied AI and lays the foundation for future research in multi-agent cooperation. Videos can be found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/.
http://arxiv.org/pdf/2307.02485
Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan
cs.AI, cs.CL, cs.CV
Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/
null
cs.AI
20230705
20230705
[ { "id": "2211.09935" }, { "id": "1712.05474" }, { "id": "2007.04954" }, { "id": "2210.04964" }, { "id": "1909.07528" }, { "id": "1903.00784" }, { "id": "1711.11017" }, { "id": "2201.11903" }, { "id": "2305.02412" }, { "id": "2212.08681" }, { "id": "2110.01517" }, { "id": "1809.00786" }, { "id": "1809.07124" }, { "id": "2303.03378" }, { "id": "2210.06849" }, { "id": "2305.05252" }, { "id": "2302.14045" }, { "id": "1810.00147" }, { "id": "2011.01975" }, { "id": "2209.07753" }, { "id": "2303.04129" }, { "id": "2301.05223" }, { "id": "2205.11916" }, { "id": "2206.08916" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2212.04088" } ]
2307.03692
2
# Introduction Large Language Models (LLMs) finetuned on instruct data can behave like conversational agents (Alpaca: Taori et al. 2023, Self-Instruct: Wang et al. 2023). The recipe for a chat model is well-defined: one needs to perform instruction tuning, which means supervised finetuning (SFT) of an LLM on tuples of instruction and response (Longpre et al. 2023). Open-source datasets vary in quality and quantity, ranging from 1k examples (Zhou et al. 2023) to over 800k examples (Anand et al. 2023). In addition, there are more than a dozen open-source base LLMs, such as LLaMA (Touvron et al. 2023), OPT (Zhang et al. 2022), GPT-Neo (Gao et al. 2020), Palmyra (Writer 2023), and others, which result in a plethora of possible combinations leading to distinct instruct models.
2307.03692#2
Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning
In this paper, we introduce the Instruction Following Score (IFS), a metric that detects language models' ability to follow instructions. The metric has a dual purpose. First, IFS can be used to distinguish between base and instruct models. We benchmark publicly available base and instruct models, and show that the ratio of well formatted responses to partial and full sentences can be an effective measure between those two model classes. Secondly, the metric can be used as an early stopping criteria for instruct tuning. We compute IFS for Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models learn to follow instructions relatively early in the training process, and the further finetuning can result in changes in the underlying base model semantics. As an example of semantics change we show the objectivity of model predictions, as defined by an auxiliary metric ObjecQA. We show that in this particular case, semantic changes are the steepest when the IFS tends to plateau. We hope that decomposing instruct tuning into IFS and semantic factors starts a new trend in better controllable instruct tuning and opens possibilities for designing minimal instruct interfaces querying foundation models.
http://arxiv.org/pdf/2307.03692
Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak
cs.CL, cs.AI
null
null
cs.CL
20230705
20230705
[ { "id": "2101.00027" } ]
2307.02046
3
Index Terms—Recommender Systems, Large Language Models (LLMs), Pre-training and Fine-tuning, In-context Learning, Prompting. # 1 INTRODUCTION alleviating information overload for enriching users’ online experience (i.e., users need to filter overwhelming information to locate their interested information) [1], [2]. They offer personalized suggestions towards candidate items tailored to meet user preferences in various application domains, such as entertainment [3], e-commerce [4], and job matching [2]. For example, on movie recommendations (e.g., IMDB and Netflix), the latest movies are recommended to users based on the content of movies and the past interaction histories of users, which help users discover new movies that accord with their interests. The basic idea of recommender systems is to make use of the interactions between users and items and their associated side information, especially textual information (e.g., item titles or descriptions, user profiles, and user reviews for items), to predict the matching
2307.02046#3
Recommender Systems in the Era of Large Language Models (LLMs)
With the prosperity of e-commerce and web applications, Recommender Systems (RecSys) have become an important component of our daily life, providing personalized suggestions that cater to user preferences. While Deep Neural Networks (DNNs) have made significant advancements in enhancing recommender systems by modeling user-item interactions and incorporating textual side information, DNN-based methods still face limitations, such as difficulties in understanding users' interests and capturing textual side information, inabilities in generalizing to various recommendation scenarios and reasoning on their predictions, etc. Meanwhile, the emergence of Large Language Models (LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI), due to their remarkable abilities in fundamental responsibilities of language understanding and generation, as well as impressive generalization and reasoning capabilities. As a result, recent studies have attempted to harness the power of LLMs to enhance recommender systems. Given the rapid evolution of this research direction in recommender systems, there is a pressing need for a systematic overview that summarizes existing LLM-empowered recommender systems, to provide researchers in relevant fields with an in-depth understanding. Therefore, in this paper, we conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting. More specifically, we first introduce representative methods to harness the power of LLMs (as a feature encoder) for learning representations of users and items. Then, we review recent techniques of LLMs for enhancing recommender systems from three paradigms, namely pre-training, fine-tuning, and prompting. Finally, we comprehensively discuss future directions in this emerging field.
http://arxiv.org/pdf/2307.02046
Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li
cs.IR, cs.AI, cs.CL
16 pages, 5 figures
null
cs.IR
20230705
20230805
[ { "id": "2201.11903" }, { "id": "2305.05973" }, { "id": "2010.15980" }, { "id": "2307.09688" }, { "id": "2307.07171" }, { "id": "2305.15498" }, { "id": "2305.02182" }, { "id": "2305.12090" }, { "id": "2305.07609" }, { "id": "2304.03516" }, { "id": "2303.14524" }, { "id": "2305.15673" }, { "id": "2301.00234" }, { "id": "2305.13112" }, { "id": "2307.10747" }, { "id": "2302.02591" }, { "id": "2305.15062" }, { "id": "2307.15780" }, { "id": "2303.13835" }, { "id": "2307.05722" }, { "id": "2305.07001" }, { "id": "2303.17564" }, { "id": "2305.11700" }, { "id": "2304.03879" }, { "id": "2206.08082" }, { "id": "2305.05065" }, { "id": "2305.00447" }, { "id": "2302.05729" }, { "id": "2304.10149" }, { "id": "2304.01097" }, { "id": "2306.05817" }, { "id": "2304.03153" }, { "id": "2304.04218" }, { "id": "2301.11489" }, { "id": "2305.06569" }, { "id": "2206.06190" }, { "id": "2307.02157" }, { "id": "2305.19860" }, { "id": "2305.15756" }, { "id": "2305.07633" }, { "id": "2305.16582" }, { "id": "2305.08845" }, { "id": "2307.03393" }, { "id": "2304.11116" }, { "id": "2306.06031" }, { "id": "2303.18223" }, { "id": "2305.15036" }, { "id": "2305.17812" }, { "id": "2010.01494" }, { "id": "2205.09666" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2106.00573" }, { "id": "2305.11255" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.06566" }, { "id": "2306.17256" }, { "id": "2305.06212" }, { "id": "2306.02552" }, { "id": "2305.07961" }, { "id": "2203.11171" }, { "id": "2301.12867" }, { "id": "2305.04518" }, { "id": "2305.14552" }, { "id": "2112.08633" }, { "id": "2307.14225" }, { "id": "1511.06939" }, { "id": "2012.15723" }, { "id": "2303.08896" }, { "id": "2306.06615" }, { "id": "2305.15075" }, { "id": "2305.09858" }, { "id": "2209.10117" }, { "id": "2305.06474" }, { "id": "2201.08239" }, { "id": "2302.03735" }, { "id": "2109.01652" }, { "id": "2305.07622" }, { "id": "2306.10933" } ]
2307.02053
3
demand problem-solving skills. Our experimental findings strongly indicate that the enhanced problem-solving abilities of our model, FLACUNA, are obtained through fine-tuning VICUNA on the FLAN dataset, leading to significant improvements across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. # 1 Introduction
2307.02053#3
Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning
Recently, the release of INSTRUCTEVAL has provided valuable insights into the performance of large language models (LLMs) that utilize encoder-decoder or decoder-only architecture. Interestingly, despite being introduced four years ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general problem-solving skills. This performance discrepancy can be attributed to three key factors: (1) Pre-training data, (2) Backbone architecture, and (3) Instruction dataset. In this technical report, our main focus is on investigating the impact of the third factor by leveraging VICUNA, a large language model based on LLAMA, which has undergone fine-tuning on ChatGPT conversations. To achieve this objective, we fine-tuned VICUNA using a customized instruction dataset collection called FLANMINI. This collection includes a subset of the large-scale instruction dataset known as FLAN, as well as various code-related datasets and conversational datasets derived from ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand problem-solving skills. Our experimental findings strongly indicate that the enhanced problem-solving abilities of our model, FLACUNA, are obtained through fine-tuning VICUNA on the FLAN dataset, leading to significant improvements across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly available at https://huggingface.co/declare-lab/flacuna-13b-v1.0.
http://arxiv.org/pdf/2307.02053
Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria
cs.CL
null
null
cs.CL
20230705
20230705
[ { "id": "2301.13688" }, { "id": "2106.09685" }, { "id": "2203.07814" }, { "id": "1909.09436" } ]
2307.02477
3
Figure 1: GPT-4’s performance on the default version of various tasks (blue) and counterfactual counterparts (orange). The shown results use 0-shot chain-of-thought prompting (§4; Kojima et al., 2023). GPT-4 consistently and substantially underperforms on counterfactual variants compared to default task instantiations. # Abstract The impressive performance of recent language models across a wide range of tasks suggests that they possess a degree of abstract reasoning skills. Are these skills general and transferable, or specialized to specific tasks seen during pretraining? To disentangle these effects, we propose an evaluation framework based on “counterfactual” task variants that deviate from the default assumptions underlying standard tasks. Across a suite of 11 tasks, we observe nontrivial performance on the counterfactual variants, but nevertheless find that performance substantially and consistently degrades compared to the default conditions. This suggests that while current LMs may possess abstract task-solving skills to a degree, they often also rely on narrow, non-transferable procedures for task-solving. These results motivate a more careful interpretation of language model performance that teases apart these aspects of behavior. # Introduction
2307.02477#3
Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks
The impressive performance of recent language models across a wide range of tasks suggests that they possess a degree of abstract reasoning skills. Are these skills general and transferable, or specialized to specific tasks seen during pretraining? To disentangle these effects, we propose an evaluation framework based on "counterfactual" task variants that deviate from the default assumptions underlying standard tasks. Across a suite of 11 tasks, we observe nontrivial performance on the counterfactual variants, but nevertheless find that performance substantially and consistently degrades compared to the default conditions. This suggests that while current LMs may possess abstract task-solving skills to a degree, they often also rely on narrow, non-transferable procedures for task-solving. These results motivate a more careful interpretation of language model performance that teases apart these aspects of behavior.
http://arxiv.org/pdf/2307.02477
Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim
cs.CL, cs.AI
null
null
cs.CL
20230705
20230801
[]
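The evaluation framework described in the chunk above pairs each default task with a counterfactual variant, e.g. two-digit addition in base-10 versus base-9. The sketch below generates such paired items and scores an arbitrary model callable on both conditions; the prompt wording, item distribution, and the `model_fn` interface are assumptions, not the paper's actual harness.

```python
import random

def to_base(n: int, base: int) -> str:
    """Render a non-negative integer as a digit string in the given base."""
    digits = []
    while True:
        n, r = divmod(n, base)
        digits.append(str(r))
        if n == 0:
            break
    return "".join(reversed(digits))

def make_addition_item(base: int, rng: random.Random):
    """One addition item: operands and the gold answer are all written in `base`."""
    a, b = rng.randint(10, 99), rng.randint(10, 99)
    prompt = f"You are in base-{base}. What is {to_base(a, base)}+{to_base(b, base)}?"
    return prompt, to_base(a + b, base)

def accuracy(model_fn, base: int, n_items: int = 20, seed: int = 0) -> float:
    rng = random.Random(seed)
    items = [make_addition_item(base, rng) for _ in range(n_items)]
    return sum(model_fn(p).strip() == gold for p, gold in items) / n_items

if __name__ == "__main__":
    dummy_model = lambda prompt: ""  # replace with a real model call
    print("default (base-10):", accuracy(dummy_model, base=10))
    print("counterfactual (base-9):", accuracy(dummy_model, base=9))
```

Comparing the two accuracies for the same model is the default-versus-counterfactual contrast the chunk describes.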
2307.02485
3
[Figure 1 residue: an illustration of agent Alice messaging agent Bob about target objects found in the kitchen and asking him to pick them up while she explores other rooms; only the caption below is fully recoverable.] Figure 1: We aim to utilize Large Language Models to build cooperative embodied agents. prompting. Our framework consists of five modules, each to address a critical aspect of successful multi-agent cooperation, including a belief module to monitor the agent’s understanding of both the physical environment and other agents, a communication module to decide what to communicate utilizing the strong free-form dialogue generation and understanding capability of LLMs, and a reasoning module to synthesize all the information provided by other modules to decide high-level plans including when to communicate.
2307.02485#3
Building Cooperative Embodied Agents Modularly with Large Language Models
Large Language Models (LLMs) have demonstrated impressive planning abilities in single-agent embodied tasks across various domains. However, their capacity for planning and communication in multi-agent cooperation remains unclear, even though these are crucial skills for intelligent embodied agents. In this paper, we present a novel framework that utilizes LLMs for multi-agent cooperation and tests it in various embodied environments. Our framework enables embodied agents to plan, communicate, and cooperate with other embodied agents or humans to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs, such as GPT-4, can surpass strong planning-based methods and exhibit emergent effective communication using our framework without requiring fine-tuning or few-shot prompting. We also discover that LLM-based agents that communicate in natural language can earn more trust and cooperate more effectively with humans. Our research underscores the potential of LLMs for embodied AI and lays the foundation for future research in multi-agent cooperation. Videos can be found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/.
http://arxiv.org/pdf/2307.02485
Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan
cs.AI, cs.CL, cs.CV
Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/
null
cs.AI
20230705
20230705
[ { "id": "2211.09935" }, { "id": "1712.05474" }, { "id": "2007.04954" }, { "id": "2210.04964" }, { "id": "1909.07528" }, { "id": "1903.00784" }, { "id": "1711.11017" }, { "id": "2201.11903" }, { "id": "2305.02412" }, { "id": "2212.08681" }, { "id": "2110.01517" }, { "id": "1809.00786" }, { "id": "1809.07124" }, { "id": "2303.03378" }, { "id": "2210.06849" }, { "id": "2305.05252" }, { "id": "2302.14045" }, { "id": "1810.00147" }, { "id": "2011.01975" }, { "id": "2209.07753" }, { "id": "2303.04129" }, { "id": "2301.05223" }, { "id": "2205.11916" }, { "id": "2206.08916" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2212.04088" } ]
2307.02486
3
# Millions $ < 5 a Figure 1: Trend of Transformer sequence lengths over time. ∗ Equal contribution. † Corresponding author. 2024 # 1 Introduction + + 20, ZKHB22, Recent years have witnessed a trend toward scaling neural networks [BMR 23]. The depth is primarily scaled up for exponential expressivity, producing CND 22]. Then, the sparse MoE mod- many powerful deep networks [HZRS16, HCB els [LLX 22] efficiently enlarge the hidden dimension. Sequence length, as the last atomic dimension of the neural net- work, is desirable to be unlimited. Breaking the limitation of sequence length introduces significant advantages. First, it provides large memory and receptive field for models, which is practical for them to interact with human and the world. Second, a longer context contains more complex causality and reasoning paths that models can exploit in training data. In contrast, short dependency has more spurious correlations, which is harmful to generalization. Third, it enables to explore the limits of in-context learning, which has the potential to be a paradigm shift for many-shot learning, as an extremely long context may help the models alleviate catastrophic forgetting.
2307.02486#3
LongNet: Scaling Transformers to 1,000,000,000 Tokens
Scaling sequence length has become a critical demand in the era of large language models. However, existing methods struggle with either computational complexity or model expressivity, rendering the maximum sequence length restricted. To address this issue, we introduce LongNet, a Transformer variant that can scale sequence length to more than 1 billion tokens, without sacrificing the performance on shorter sequences. Specifically, we propose dilated attention, which expands the attentive field exponentially as the distance grows. LongNet has significant advantages: 1) it has a linear computation complexity and a logarithm dependency between any two tokens in a sequence; 2) it can be served as a distributed trainer for extremely long sequences; 3) its dilated attention is a drop-in replacement for standard attention, which can be seamlessly integrated with the existing Transformer-based optimization. Experiments results demonstrate that LongNet yields strong performance on both long-sequence modeling and general language tasks. Our work opens up new possibilities for modeling very long sequences, e.g., treating a whole corpus or even the entire Internet as a sequence.
http://arxiv.org/pdf/2307.02486
Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei
cs.CL, cs.LG
Work in progress
null
cs.CL
20230705
20230719
[]
2307.03692
3
We can see instruct tuning attempts through the lens of the "imitation models" - concept introduced by Gudibande et al. 2023, i.e., efforts to distil closed (and possibly much bigger) proprietary models like ChatGPT (OpenAI 2022), Bard (Pichai 2023), and Claude (AnthropicAI 2023). Little is known about the qualitative impact of the distillation process on the base model (Hinton, Vinyals, and Dean 2015). Imitation success is measured in terms of knowledge (e.g., HELM Liang et al. 2022), skills (e.g., Natural Questions Kwiatkowski et al. 2019) or manual checks based on human preferences (Zhou et al. 2023). There is no consensus whether a manual check that might skew the metric towards style and formatting of responses is a good overall metric (Gudibande et al. 2023). A fairly recent attempt to more robustly evaluate instruct models is the Huggingface Leaderboard (Huggingface 2023b), which evaluates models against four key benchmarks from the Eleuther AI Language Model Evaluation Harness (Gao et al. 2021).
2307.03692#3
Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning
In this paper, we introduce the Instruction Following Score (IFS), a metric that detects language models' ability to follow instructions. The metric has a dual purpose. First, IFS can be used to distinguish between base and instruct models. We benchmark publicly available base and instruct models, and show that the ratio of well formatted responses to partial and full sentences can be an effective measure between those two model classes. Secondly, the metric can be used as an early stopping criteria for instruct tuning. We compute IFS for Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models learn to follow instructions relatively early in the training process, and the further finetuning can result in changes in the underlying base model semantics. As an example of semantics change we show the objectivity of model predictions, as defined by an auxiliary metric ObjecQA. We show that in this particular case, semantic changes are the steepest when the IFS tends to plateau. We hope that decomposing instruct tuning into IFS and semantic factors starts a new trend in better controllable instruct tuning and opens possibilities for designing minimal instruct interfaces querying foundation models.
http://arxiv.org/pdf/2307.03692
Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak
cs.CL, cs.AI
null
null
cs.CL
20230705
20230705
[ { "id": "2101.00027" } ]
2307.02046
4
• W. Fan, Z. Zhao, J. Li, Y. Liu, and Q. Li are with the Department of Computing, The Hong Kong Polytechnic University. E-mail: {wenqifan03, scofield.zzh}@gmail.com, {jiatong.li, yunqing617.liu}@connect.polyu.hk, [email protected]. • X. Mei is with the Department of Management and Marketing, The Hong Kong Polytechnic University. E-mail: [email protected]. • Y. Wang is with National University of Defense Technology. E-mail: [email protected]. score between users and items (i.e., the probability that the user would like the item) [5]. More specifically, collaborative behaviors between users and items have been leveraged to design various recommendation models, which can be further used to learn the representations of users and items [6], [7]. In addition, textual side information about users and items contains rich knowledge that can assist in the calculation of the matching scores, providing great opportunities to understand user preferences for advancing recommender systems [8].
2307.02046#4
Recommender Systems in the Era of Large Language Models (LLMs)
With the prosperity of e-commerce and web applications, Recommender Systems (RecSys) have become an important component of our daily life, providing personalized suggestions that cater to user preferences. While Deep Neural Networks (DNNs) have made significant advancements in enhancing recommender systems by modeling user-item interactions and incorporating textual side information, DNN-based methods still face limitations, such as difficulties in understanding users' interests and capturing textual side information, inabilities in generalizing to various recommendation scenarios and reasoning on their predictions, etc. Meanwhile, the emergence of Large Language Models (LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI), due to their remarkable abilities in fundamental responsibilities of language understanding and generation, as well as impressive generalization and reasoning capabilities. As a result, recent studies have attempted to harness the power of LLMs to enhance recommender systems. Given the rapid evolution of this research direction in recommender systems, there is a pressing need for a systematic overview that summarizes existing LLM-empowered recommender systems, to provide researchers in relevant fields with an in-depth understanding. Therefore, in this paper, we conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting. More specifically, we first introduce representative methods to harness the power of LLMs (as a feature encoder) for learning representations of users and items. Then, we review recent techniques of LLMs for enhancing recommender systems from three paradigms, namely pre-training, fine-tuning, and prompting. Finally, we comprehensively discuss future directions in this emerging field.
http://arxiv.org/pdf/2307.02046
Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li
cs.IR, cs.AI, cs.CL
16 pages, 5 figures
null
cs.IR
20230705
20230805
[ { "id": "2201.11903" }, { "id": "2305.05973" }, { "id": "2010.15980" }, { "id": "2307.09688" }, { "id": "2307.07171" }, { "id": "2305.15498" }, { "id": "2305.02182" }, { "id": "2305.12090" }, { "id": "2305.07609" }, { "id": "2304.03516" }, { "id": "2303.14524" }, { "id": "2305.15673" }, { "id": "2301.00234" }, { "id": "2305.13112" }, { "id": "2307.10747" }, { "id": "2302.02591" }, { "id": "2305.15062" }, { "id": "2307.15780" }, { "id": "2303.13835" }, { "id": "2307.05722" }, { "id": "2305.07001" }, { "id": "2303.17564" }, { "id": "2305.11700" }, { "id": "2304.03879" }, { "id": "2206.08082" }, { "id": "2305.05065" }, { "id": "2305.00447" }, { "id": "2302.05729" }, { "id": "2304.10149" }, { "id": "2304.01097" }, { "id": "2306.05817" }, { "id": "2304.03153" }, { "id": "2304.04218" }, { "id": "2301.11489" }, { "id": "2305.06569" }, { "id": "2206.06190" }, { "id": "2307.02157" }, { "id": "2305.19860" }, { "id": "2305.15756" }, { "id": "2305.07633" }, { "id": "2305.16582" }, { "id": "2305.08845" }, { "id": "2307.03393" }, { "id": "2304.11116" }, { "id": "2306.06031" }, { "id": "2303.18223" }, { "id": "2305.15036" }, { "id": "2305.17812" }, { "id": "2010.01494" }, { "id": "2205.09666" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2106.00573" }, { "id": "2305.11255" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.06566" }, { "id": "2306.17256" }, { "id": "2305.06212" }, { "id": "2306.02552" }, { "id": "2305.07961" }, { "id": "2203.11171" }, { "id": "2301.12867" }, { "id": "2305.04518" }, { "id": "2305.14552" }, { "id": "2112.08633" }, { "id": "2307.14225" }, { "id": "1511.06939" }, { "id": "2012.15723" }, { "id": "2303.08896" }, { "id": "2306.06615" }, { "id": "2305.15075" }, { "id": "2305.09858" }, { "id": "2209.10117" }, { "id": "2305.06474" }, { "id": "2201.08239" }, { "id": "2302.03735" }, { "id": "2109.01652" }, { "id": "2305.07622" }, { "id": "2306.10933" } ]
2307.02053
4
ChatGPT and its successor GPT-4 have surpassed prior state-of-the-art models on a vast majority of benchmarking tasks and datasets. However, to preserve privacy, natively running a 175B+ sized model like GPT-3 is beyond the capabilities of most organizations, let alone individuals. This has prompted many researchers to fine-tune manageable-sized LLMs — from 7B to 30B — on a diverse set of instruction examples generated by ChatGPT or GPT-4. This has birthed LLMs such as Alpaca [Taori et al., 2023] and VICUNA [Chiang et al., 2023], which are fine-tuned checkpoints of LLaMA [Touvron et al., 2023]. These models have attained close to ChatGPT-level performance on some specific benchmarking tasks, but overall generalization still remains elusive. Recent works like INSTRUCTEVAL [Chia et al., 2023] strongly hint that the fine-tuning datasets dictate the task-specific performances. For instance, it has been observed that FLAN-T5 — a T5 checkpoint fine-tuned on the FLAN Collection instruction dataset —
2307.02053#4
Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning
Recently, the release of INSTRUCTEVAL has provided valuable insights into the performance of large language models (LLMs) that utilize encoder-decoder or decoder-only architecture. Interestingly, despite being introduced four years ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general problem-solving skills. This performance discrepancy can be attributed to three key factors: (1) Pre-training data, (2) Backbone architecture, and (3) Instruction dataset. In this technical report, our main focus is on investigating the impact of the third factor by leveraging VICUNA, a large language model based on LLAMA, which has undergone fine-tuning on ChatGPT conversations. To achieve this objective, we fine-tuned VICUNA using a customized instruction dataset collection called FLANMINI. This collection includes a subset of the large-scale instruction dataset known as FLAN, as well as various code-related datasets and conversational datasets derived from ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand problem-solving skills. Our experimental findings strongly indicate that the enhanced problem-solving abilities of our model, FLACUNA, are obtained through fine-tuning VICUNA on the FLAN dataset, leading to significant improvements across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly available at https://huggingface.co/declare-lab/flacuna-13b-v1.0.
http://arxiv.org/pdf/2307.02053
Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria
cs.CL
null
null
cs.CL
20230705
20230705
[ { "id": "2301.13688" }, { "id": "2106.09685" }, { "id": "2203.07814" }, { "id": "1909.09436" } ]
2307.02477
4
# Introduction The striking empirical successes of language models (LMs) suggest that next-word prediction at scale may be a viable approach for distilling the knowledge embedded in large-scale text corpora into general-purpose interactive agents. LMs obtain impressive results on various NLP benchmarks (OpenAI, 2023; Anil et al., 2023; Anthropic, 2023; i.a.) and display surprising abilities that suggest a nontrivial understanding of the world (Bubeck et al., 2023). They have been shown to pass professional exams (Kung et al., 2023; Nori et al., 2023; Terwiesch, 2023; i.a.), exceed state-of-the-art methods on many traditional benchmarks (Sun et al., 2023; Sobania et al., 2023; Zhang et al., 2023a; Dhingra et al., 2023; i.a.), and surpass human performance on tasks that require seemingly nontrivial reasoning (Chowdhery et al., 2022; Hoffmann et al., 2022; Malinka et al., 2023; Guo et al., 2023; i.a.).
2307.02477#4
Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks
The impressive performance of recent language models across a wide range of tasks suggests that they possess a degree of abstract reasoning skills. Are these skills general and transferable, or specialized to specific tasks seen during pretraining? To disentangle these effects, we propose an evaluation framework based on "counterfactual" task variants that deviate from the default assumptions underlying standard tasks. Across a suite of 11 tasks, we observe nontrivial performance on the counterfactual variants, but nevertheless find that performance substantially and consistently degrades compared to the default conditions. This suggests that while current LMs may possess abstract task-solving skills to a degree, they often also rely on narrow, non-transferable procedures for task-solving. These results motivate a more careful interpretation of language model performance that teases apart these aspects of behavior.
http://arxiv.org/pdf/2307.02477
Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim
cs.CL, cs.AI
null
null
cs.CL
20230705
20230801
[]
2307.02485
4
We evaluate our framework on two extended embodied multi-agent cooperation challenges: Communicative Watch-And-Help (C-WAH) and ThreeDWorld Multi-Agent Transport (TDW-MAT). Our experimental results indicate that cooperative embodied agents built with Large Language Models can plan, communicate, and cooperate with other embodied agents and humans to accomplish long-horizon tasks efficiently. For example, as illustrated in Figure 1, the LLM-based agent can reason about the current state and the other agent’s state, and divide the labor with its partner through communication effectively. In particular, by harnessing the rich world knowledge and strong reasoning capability of recent Large Language Models, such as GPT-4, our method can outperform strong planning-based baselines and exhibit emergent efficient communication. In a user study, we also discover that LLM-based agents that communicate with humans in natural language can earn more trust from humans. In sum, our contributions include: • We conducted the first systematic study on LLMs’ capacity for planning and communication in embodied multi-agent cooperation. • We introduced a novel framework that utilizes LLMs to build cooperative embodied agents, surpassing strong planning-based methods. • We conducted a user study to evaluate the possibility of achieving effective and trustworthy human-AI cooperation using LLMs. # 2 Related Work
2307.02485#4
Building Cooperative Embodied Agents Modularly with Large Language Models
Large Language Models (LLMs) have demonstrated impressive planning abilities in single-agent embodied tasks across various domains. However, their capacity for planning and communication in multi-agent cooperation remains unclear, even though these are crucial skills for intelligent embodied agents. In this paper, we present a novel framework that utilizes LLMs for multi-agent cooperation and tests it in various embodied environments. Our framework enables embodied agents to plan, communicate, and cooperate with other embodied agents or humans to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs, such as GPT-4, can surpass strong planning-based methods and exhibit emergent effective communication using our framework without requiring fine-tuning or few-shot prompting. We also discover that LLM-based agents that communicate in natural language can earn more trust and cooperate more effectively with humans. Our research underscores the potential of LLMs for embodied AI and lays the foundation for future research in multi-agent cooperation. Videos can be found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/.
http://arxiv.org/pdf/2307.02485
Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan
cs.AI, cs.CL, cs.CV
Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/
null
cs.AI
20230705
20230705
[ { "id": "2211.09935" }, { "id": "1712.05474" }, { "id": "2007.04954" }, { "id": "2210.04964" }, { "id": "1909.07528" }, { "id": "1903.00784" }, { "id": "1711.11017" }, { "id": "2201.11903" }, { "id": "2305.02412" }, { "id": "2212.08681" }, { "id": "2110.01517" }, { "id": "1809.00786" }, { "id": "1809.07124" }, { "id": "2303.03378" }, { "id": "2210.06849" }, { "id": "2305.05252" }, { "id": "2302.14045" }, { "id": "1810.00147" }, { "id": "2011.01975" }, { "id": "2209.07753" }, { "id": "2303.04129" }, { "id": "2301.05223" }, { "id": "2205.11916" }, { "id": "2206.08916" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2212.04088" } ]
2307.02486
4
The major challenge of scaling up sequence length is striking the right balance between the computational complexity and the model expressivity. RNN-style models are primarily implemented to increase the length. However, their sequential nature limits the parallelization during training, which is essential in long-sequence modeling. More recently, state space models [GGR22, SWL23, FDS+23] have been proposed; they can operate as a CNN during training, and transform to an efficient RNN at test time. While they perform well at long-range benchmarks [TDA+21], their performance on regular lengths is not as good as Transformers, limited mainly by the model expressivity [FPB+23].
2307.02486#4
LongNet: Scaling Transformers to 1,000,000,000 Tokens
Scaling sequence length has become a critical demand in the era of large language models. However, existing methods struggle with either computational complexity or model expressivity, rendering the maximum sequence length restricted. To address this issue, we introduce LongNet, a Transformer variant that can scale sequence length to more than 1 billion tokens, without sacrificing the performance on shorter sequences. Specifically, we propose dilated attention, which expands the attentive field exponentially as the distance grows. LongNet has significant advantages: 1) it has a linear computation complexity and a logarithmic dependency between any two tokens in a sequence; 2) it can serve as a distributed trainer for extremely long sequences; 3) its dilated attention is a drop-in replacement for standard attention, which can be seamlessly integrated with the existing Transformer-based optimization. Experimental results demonstrate that LongNet yields strong performance on both long-sequence modeling and general language tasks. Our work opens up new possibilities for modeling very long sequences, e.g., treating a whole corpus or even the entire Internet as a sequence.
http://arxiv.org/pdf/2307.02486
Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei
cs.CL, cs.LG
Work in progress
null
cs.CL
20230705
20230719
[]
2307.03692
4
Ablation studies have shown that both the diversity and quality of the training data play a crucial role in model performance (Chen et al. 2023, Zhou et al. 2023). Low Training Data Instruction Tuning (LTD Tuning) suggests that task-specific models can gain 2% performance when trained on less than 0.5% of the original data. Moreover, prolonged instruction tuning can decrease the foundational model knowledge (Gudibande et al. 2023) and can be seen as the out-of-distribution task for a downstream task of instruct-tuning (Kumar et al. 2022). In this study, we want to lay the foundation for instruct models research by defining the necessary (but not sufficient) condition for an instruct model. Let’s conduct a thought experiment.
2307.03692#4
Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning
In this paper, we introduce the Instruction Following Score (IFS), a metric that detects language models' ability to follow instructions. The metric has a dual purpose. First, IFS can be used to distinguish between base and instruct models. We benchmark publicly available base and instruct models, and show that the ratio of well formatted responses to partial and full sentences can be an effective measure between those two model classes. Secondly, the metric can be used as an early stopping criterion for instruct tuning. We compute IFS for Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models learn to follow instructions relatively early in the training process, and that further finetuning can result in changes in the underlying base model semantics. As an example of semantic change, we show the objectivity of model predictions, as defined by an auxiliary metric ObjecQA. We show that in this particular case, semantic changes are the steepest when the IFS tends to plateau. We hope that decomposing instruct tuning into IFS and semantic factors starts a new trend in better controllable instruct tuning and opens possibilities for designing minimal instruct interfaces querying foundation models.
http://arxiv.org/pdf/2307.03692
Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak
cs.CL, cs.AI
null
null
cs.CL
20230705
20230705
[ { "id": "2101.00027" } ]
2307.02046
5
Due to the remarkable ability of representation learning in various fields, Deep Neural Networks (DNNs) have been widely adopted to advance recommender systems [9], [10]. DNNs demonstrate distinctive abilities in modeling user-item interactions with different architectures. For example, as particularly effective tools for sequential data, Recurrent Neural Networks (RNNs) have been adopted to capture high-order dependencies in user interaction sequences [11], [12]. Considering users' online behaviors (e.g., click, purchase, socializing) as graph-structured data, Graph Neural Networks (GNNs) have emerged as advanced representation learning techniques to learn user and item representations [1], [6], [13]. Meanwhile, DNNs have also demonstrated advantages in encoding side information. For instance, a BERT-based method is proposed to extract and utilize textual reviews from users [14].
2307.02046#5
Recommender Systems in the Era of Large Language Models (LLMs)
With the prosperity of e-commerce and web applications, Recommender Systems (RecSys) have become an important component of our daily life, providing personalized suggestions that cater to user preferences. While Deep Neural Networks (DNNs) have made significant advancements in enhancing recommender systems by modeling user-item interactions and incorporating textual side information, DNN-based methods still face limitations, such as difficulties in understanding users' interests and capturing textual side information, and an inability to generalize to various recommendation scenarios or reason about their predictions. Meanwhile, the emergence of Large Language Models (LLMs), such as ChatGPT and GPT-4, has revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI), due to their remarkable abilities in fundamental tasks of language understanding and generation, as well as impressive generalization and reasoning capabilities. As a result, recent studies have attempted to harness the power of LLMs to enhance recommender systems. Given the rapid evolution of this research direction in recommender systems, there is a pressing need for a systematic overview that summarizes existing LLM-empowered recommender systems, to provide researchers in relevant fields with an in-depth understanding. Therefore, in this paper, we conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting. More specifically, we first introduce representative methods to harness the power of LLMs (as a feature encoder) for learning representations of users and items. Then, we review recent techniques of LLMs for enhancing recommender systems from three paradigms, namely pre-training, fine-tuning, and prompting. Finally, we comprehensively discuss future directions in this emerging field.
http://arxiv.org/pdf/2307.02046
Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li
cs.IR, cs.AI, cs.CL
16 pages, 5 figures
null
cs.IR
20230705
20230805
[ { "id": "2201.11903" }, { "id": "2305.05973" }, { "id": "2010.15980" }, { "id": "2307.09688" }, { "id": "2307.07171" }, { "id": "2305.15498" }, { "id": "2305.02182" }, { "id": "2305.12090" }, { "id": "2305.07609" }, { "id": "2304.03516" }, { "id": "2303.14524" }, { "id": "2305.15673" }, { "id": "2301.00234" }, { "id": "2305.13112" }, { "id": "2307.10747" }, { "id": "2302.02591" }, { "id": "2305.15062" }, { "id": "2307.15780" }, { "id": "2303.13835" }, { "id": "2307.05722" }, { "id": "2305.07001" }, { "id": "2303.17564" }, { "id": "2305.11700" }, { "id": "2304.03879" }, { "id": "2206.08082" }, { "id": "2305.05065" }, { "id": "2305.00447" }, { "id": "2302.05729" }, { "id": "2304.10149" }, { "id": "2304.01097" }, { "id": "2306.05817" }, { "id": "2304.03153" }, { "id": "2304.04218" }, { "id": "2301.11489" }, { "id": "2305.06569" }, { "id": "2206.06190" }, { "id": "2307.02157" }, { "id": "2305.19860" }, { "id": "2305.15756" }, { "id": "2305.07633" }, { "id": "2305.16582" }, { "id": "2305.08845" }, { "id": "2307.03393" }, { "id": "2304.11116" }, { "id": "2306.06031" }, { "id": "2303.18223" }, { "id": "2305.15036" }, { "id": "2305.17812" }, { "id": "2010.01494" }, { "id": "2205.09666" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2106.00573" }, { "id": "2305.11255" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.06566" }, { "id": "2306.17256" }, { "id": "2305.06212" }, { "id": "2306.02552" }, { "id": "2305.07961" }, { "id": "2203.11171" }, { "id": "2301.12867" }, { "id": "2305.04518" }, { "id": "2305.14552" }, { "id": "2112.08633" }, { "id": "2307.14225" }, { "id": "1511.06939" }, { "id": "2012.15723" }, { "id": "2303.08896" }, { "id": "2306.06615" }, { "id": "2305.15075" }, { "id": "2305.09858" }, { "id": "2209.10117" }, { "id": "2305.06474" }, { "id": "2201.08239" }, { "id": "2302.03735" }, { "id": "2109.01652" }, { "id": "2305.07622" }, { "id": "2306.10933" } ]
2307.02477
5
Ideally, we expect a general-purpose LM to be able to generalize not only to unseen instances of known tasks, but to new tasks. Humans, for example, can transfer their knowledge to new instances and also flexibly adapt to novel tasks (Singley and Anderson, 1989). To what extent does the performance of current LMs derive from their ability to deploy task-general reasoning skills, versus their ability to recognize and recall specific tasks seen frequently in pre-training? Past work has focused on instance-level generalization, but this is often complicated by data contamination issues (Dodge et al., 2021; Magar and Schwartz, 2022; i.a.). In this work, we are interested in the models’ generalizability to new task variants, which has been less systematically studied for LMs (though see Li et al. (2022), Mishra et al. (2022), and Wang et al. (2022b)).
2307.02477#5
Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks
The impressive performance of recent language models across a wide range of tasks suggests that they possess a degree of abstract reasoning skills. Are these skills general and transferable, or specialized to specific tasks seen during pretraining? To disentangle these effects, we propose an evaluation framework based on "counterfactual" task variants that deviate from the default assumptions underlying standard tasks. Across a suite of 11 tasks, we observe nontrivial performance on the counterfactual variants, but nevertheless find that performance substantially and consistently degrades compared to the default conditions. This suggests that while current LMs may possess abstract task-solving skills to a degree, they often also rely on narrow, non-transferable procedures for task-solving. These results motivate a more careful interpretation of language model performance that teases apart these aspects of behavior.
http://arxiv.org/pdf/2307.02477
Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim
cs.CL, cs.AI
null
null
cs.CL
20230705
20230801
[]
2307.02485
5
• We conducted a user study to evaluate the possibility of achieving effective and trustworthy human-AI cooperation using LLMs. # 2 Related Work Multi-Agent Cooperation and Communication Plenty of works have explored various aspects of multi-agent cooperation and communication. Some works provide various platforms for multi-agent tasks [27, 38, 43, 17, 39, 45, 2, 3]. Other works focused on methods that improve communication efficiency [21, 8, 46], cooperation in visually rich domains [18], or grounding communications in environments [33]. For embodied intelligence, [35] and [36] explored the social perception of the agents during their cooperation. These works usually disable communication [17, 39, 7, 35, 36], use continuous vectors [21, 8] for communication, or use discrete symbols [27, 20, 18, 33, 38] for communication. In contrast, our work stands apart by employing large language models for communication, introducing a novel perspective that utilizes natural language to enhance multi-agent cooperation and communication.
2307.02485#5
Building Cooperative Embodied Agents Modularly with Large Language Models
Large Language Models (LLMs) have demonstrated impressive planning abilities in single-agent embodied tasks across various domains. However, their capacity for planning and communication in multi-agent cooperation remains unclear, even though these are crucial skills for intelligent embodied agents. In this paper, we present a novel framework that utilizes LLMs for multi-agent cooperation and tests it in various embodied environments. Our framework enables embodied agents to plan, communicate, and cooperate with other embodied agents or humans to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs, such as GPT-4, can surpass strong planning-based methods and exhibit emergent effective communication using our framework without requiring fine-tuning or few-shot prompting. We also discover that LLM-based agents that communicate in natural language can earn more trust and cooperate more effectively with humans. Our research underscores the potential of LLMs for embodied AI and lays the foundation for future research in multi-agent cooperation. Videos can be found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/.
http://arxiv.org/pdf/2307.02485
Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan
cs.AI, cs.CL, cs.CV
Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/
null
cs.AI
20230705
20230705
[ { "id": "2211.09935" }, { "id": "1712.05474" }, { "id": "2007.04954" }, { "id": "2210.04964" }, { "id": "1909.07528" }, { "id": "1903.00784" }, { "id": "1711.11017" }, { "id": "2201.11903" }, { "id": "2305.02412" }, { "id": "2212.08681" }, { "id": "2110.01517" }, { "id": "1809.00786" }, { "id": "1809.07124" }, { "id": "2303.03378" }, { "id": "2210.06849" }, { "id": "2305.05252" }, { "id": "2302.14045" }, { "id": "1810.00147" }, { "id": "2011.01975" }, { "id": "2209.07753" }, { "id": "2303.04129" }, { "id": "2301.05223" }, { "id": "2205.11916" }, { "id": "2206.08916" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2207.05608" }, { "id": "2212.04088" } ]
2307.02486
5
Another strand of scaling the sequence length is to decrease the complexity of Transformers, i.e., the quadratic complexity of self-attention. Implementing sliding windows or convolution modules over the attention is a straightforward way to make the complexity nearly linear. Nevertheless, this sacrifices the ability to recall the early tokens, forgetting the prompts at the very beginning of the sequence. Sparse attention reduces the computation by sparsifying the attention matrix, preserving the possibility of recalling long-distant information. For example, [CGRS19] obtains O(N √N d) time complexity with a fixed sparse pattern. Besides the heuristic patterns [ZGD+20, BPC20], learnable patterns also prove to be useful for sparse attention [KKL20, ALdJ+23]. There are also some other efficient Transformer-based variants, including low-rank attention [WLK+20, MKW+21], kernel-based methods [KVPF20, CLD+21], and recurrent models [DYY+23]. Yet, none has been scaled to 1 billion tokens (see Figure 1). Computation complexity by method: Recurrent: O(N d^2); Vanilla Attention: O(N^2 d); Sparse Attention: O(N √N d); Dilated Attention (this work): O(N d).
2307.02486#5
LongNet: Scaling Transformers to 1,000,000,000 Tokens
Scaling sequence length has become a critical demand in the era of large language models. However, existing methods struggle with either computational complexity or model expressivity, rendering the maximum sequence length restricted. To address this issue, we introduce LongNet, a Transformer variant that can scale sequence length to more than 1 billion tokens, without sacrificing the performance on shorter sequences. Specifically, we propose dilated attention, which expands the attentive field exponentially as the distance grows. LongNet has significant advantages: 1) it has a linear computation complexity and a logarithmic dependency between any two tokens in a sequence; 2) it can serve as a distributed trainer for extremely long sequences; 3) its dilated attention is a drop-in replacement for standard attention, which can be seamlessly integrated with the existing Transformer-based optimization. Experimental results demonstrate that LongNet yields strong performance on both long-sequence modeling and general language tasks. Our work opens up new possibilities for modeling very long sequences, e.g., treating a whole corpus or even the entire Internet as a sequence.
http://arxiv.org/pdf/2307.02486
Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei
cs.CL, cs.LG
Work in progress
null
cs.CL
20230705
20230719
[]
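The chunk of record 2307.02486#5 above compares the computation complexity of attention variants and credits dilated attention with O(N d) cost. The sketch below is only a rough, hedged illustration of the idea of an exponentially dilated sparsity pattern (the segment lengths, dilation rates, and mask construction are assumptions, not LongNet's actual implementation): within segments of length w, only positions at stride r attend to each other, and (w, r) grow geometrically, so the number of attended pairs grows roughly linearly with sequence length.

```python
# Hedged sketch of a dilated-attention-style sparsity pattern: within segments of
# length w, only positions at stride r attend to each other, and (w, r) grow
# geometrically across mixed patterns. Parameters are illustrative assumptions.
import numpy as np

def dilated_mask(n, segment_lengths=(16, 64, 256), dilation_rates=(1, 4, 16)):
    mask = np.zeros((n, n), dtype=bool)
    for w, r in zip(segment_lengths, dilation_rates):
        for start in range(0, n, w):
            idx = np.arange(start, min(start + w, n), r)  # sparsified positions
            mask[np.ix_(idx, idx)] = True                 # dense attention among them
    return mask

for n in (256, 1024, 4096):
    m = dilated_mask(n)
    print(n, int(m.sum()))  # attended pairs grow roughly linearly in n, not as n*n
```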
2307.03692
5
Let’s put all models behind a closed API (a recent equivalent of a black box). Is the model instruct-tuned or not? Knowledge benchmarks could be similar for vanilla and instruct models for LTD tuning. Skills tests would highly depend on the model size, which is not known. The simplest way of solving the riddle would be to ... chat with the model and judge the tone of the response. For a vanilla model, we expect a next-word prediction attempt, whereas for instruct models, we expect them to follow instructions. We introduce a metric that captures this tone difference: the Instruct Following Score (IFS). We call this problem a "tone alignment" issue. The IFS is defined as a ratio of "answer-like" responses to "continuation-like" responses on a predefined set of instructions, where the class of a response is determined by a binary classifier. We benchmark publicly available base and instruct models, and show that the ratio of well formatted responses to partial and full sentences can be an effective measure between vanilla and instruct-following models. Moreover, we calculate IFS for SFT for 7B and 13B LLaMA models, in the hope of finding a stopping criterion for minimal instruct tuning.
2307.03692#5
Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning
In this paper, we introduce the Instruction Following Score (IFS), a metric that detects language models' ability to follow instructions. The metric has a dual purpose. First, IFS can be used to distinguish between base and instruct models. We benchmark publicly available base and instruct models, and show that the ratio of well formatted responses to partial and full sentences can be an effective measure between those two model classes. Secondly, the metric can be used as an early stopping criterion for instruct tuning. We compute IFS for Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models learn to follow instructions relatively early in the training process, and that further finetuning can result in changes in the underlying base model semantics. As an example of semantic change, we show the objectivity of model predictions, as defined by an auxiliary metric ObjecQA. We show that in this particular case, semantic changes are the steepest when the IFS tends to plateau. We hope that decomposing instruct tuning into IFS and semantic factors starts a new trend in better controllable instruct tuning and opens possibilities for designing minimal instruct interfaces querying foundation models.
http://arxiv.org/pdf/2307.03692
Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak
cs.CL, cs.AI
null
null
cs.CL
20230705
20230705
[ { "id": "2101.00027" } ]
2307.02053
6
To this end, we first sample a 1M-sized instruction dataset from the 15M-sized FLAN Collection dataset [Longpre et al., 2023] and combine it with several other datasets comprising coding tasks and ChatGPT/GPT-4 distilled conversations. The resulting smaller dataset, FLAN-MINI, is then cast into the conversational format of VICUNA. To ensure a reasonable computational cost for the fine-tuning process, we retrofit a LoRA [Hu et al., 2021] adapter into the LLaMA [Touvron et al., 2023] decoder-transformer of VICUNA. Following a parameter-efficient LoRA fine-tuning of the VICUNA checkpoint on FLAN-MINI, we obtain FLACUNA. As expected, FLACUNA outperforms VICUNA by a substantial margin on most benchmark datasets, especially for reasoning-intensive tasks. However, the performance of FLACUNA still remains below FLAN-T5 on the same reasoning benchmarks. This could be attributed to the instruction dataset being 15 times smaller, which may contain less diverse samples. Furthermore, full fine-tuning of VICUNA may narrow the gap with FLAN-T5. This work overall has the following contributions:
2307.02053#6
Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning
Recently, the release of INSTRUCTEVAL has provided valuable insights into the performance of large language models (LLMs) that utilize encoder-decoder or decoder-only architecture. Interestingly, despite being introduced four years ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general problem-solving skills. This performance discrepancy can be attributed to three key factors: (1) Pre-training data, (2) Backbone architecture, and (3) Instruction dataset. In this technical report, our main focus is on investigating the impact of the third factor by leveraging VICUNA, a large language model based on LLAMA, which has undergone fine-tuning on ChatGPT conversations. To achieve this objective, we fine-tuned VICUNA using a customized instruction dataset collection called FLANMINI. This collection includes a subset of the large-scale instruction dataset known as FLAN, as well as various code-related datasets and conversational datasets derived from ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand problem-solving skills. Our experimental findings strongly indicate that the enhanced problem-solving abilities of our model, FLACUNA, are obtained through fine-tuning VICUNA on the FLAN dataset, leading to significant improvements across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly available at https://huggingface.co/declare-lab/flacuna-13b-v1.0.
http://arxiv.org/pdf/2307.02053
Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria
cs.CL
null
null
cs.CL
20230705
20230705
[ { "id": "2301.13688" }, { "id": "2106.09685" }, { "id": "2203.07814" }, { "id": "1909.09436" } ]