Dataset schema (column: dtype, value/length range):
doi: string (10-10)
chunk-id: int64 (0-936)
chunk: string (401-2.02k)
id: string (12-14)
title: string (8-162)
summary: string (228-1.92k)
source: string (31-31)
authors: string (7-6.97k)
categories: string (5-107)
comment: string (4-398)
journal_ref: string (8-194)
primary_category: string (5-17)
published: string (8-8)
updated: string (8-8)
references: list
2307.02762
71
[The Start of Answer 1] An early form of joint stock ventures appeared in Assyria by the beginning of the 2nd millennium BCE. There does not seem to have been any noticeable pushback against the concept, although trade caravans were often taxed heavily. Of course, the relevant texts were written to and from people engaged in trade, so there’s no way of knowing what other Assyrians thought of long distance trade. >Most of these traders had become more independent by having become managers of a "joint-stock fund" (called *naruqqum*, "money bag"), usually set up in Assur. This phenomenon appeared for the first time around 1900 BC and seems to have
2307.02762#71
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
71
3.5 Medical Applications The application of LLMs in the medical field has recently received significant attention. As a result, this section aims to provide a comprehensive review of the ongoing efforts dedicated to implementing LLMs in medical applications. We have categorized these applications into three aspects as shown in Table 5: medical query, medical examination, and medical assistants. A detailed Table 5. Summary of evaluations on medical applications based on the three aspects: Medical queries, Medical assistants, and Medical examination (ordered by the name of the first author). Reference Cascella et al. [17] Chervenak et al. [21] Duong and Solomon [39] Gilson et al. [57] Hamidi and Roberts [63] Holmes et al. [73] Jahan et al. [81] Johnson et al. [87] Khan et al. [93] Kung et al. [97] Lahat et al. [99] Lyu et al. [131] Oh et al. [143] Samaan et al. [169] Thirunavukarasu et al. [186] Wang et al. [217] Medical queries Medical examination Medical assistants ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
2307.03109#71
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
71
# F Token Counts

Table 2, Table 3, and Table 4 present the average and maximum number of tokens in each of the input contexts for all experimental settings. Note that MPT-30B and MPT-30B-Instruct use the same tokenizer, GPT-3.5-Turbo and GPT-3.5-Turbo (16K) use the same tokenizer, and Claude-1.3 and Claude-1.3 (100K) use the same tokenizer. Furthermore, the Claude-1.3 tokenizer is the same as the GPT-3.5-Turbo tokenizer, modulo some additional special tokens that do not appear in our data. As a result, the token counts for these two model families are the same in our experimental settings.

| Model | Closed-Book avg ± stdev | Closed-Book max | Oracle avg ± stdev | Oracle max |
|---|---|---|---|---|
| LongChat-13B (16K) | 55.6 ± 2.7 | 70 | 219.7 ± 48.5 | 588 |
| MPT-30B | 43.5 ± 2.2 | 58 | 187.9 ± 41.8 | 482 |
| GPT-3.5-Turbo | 15.3 ± 2.2 | 29 | 156.0 ± 41.8 | 449 |
| Claude-1.3 | 15.3 ± 2.2 | 29 | 156.0 ± 41.8 | 449 |
2307.03172#71
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
72
been an Old Assyrian invention that went beyond individual partnerships and cooperation in a joint caravan. The arrangement, rather similar to that of the early medieval *compagnia*, meant enlisting a number (usually about a dozen) of investors (*ummiānum*, "financiers"), who supplied capital rated in gold, usually in all ca. 30 kilos, ideally consisting of shares of 1 or 2 kilos of gold each. It was entrusted to a trader (the tractator), usually for ca. ten years, for the generally formulated purpose of "carrying out trade." The contract contained stipulations on a final settlement of accounts, on paying dividends, on the division of the expected profit, and on fines for premature withdrawal of capital (meant to secure the duration of the business). Investors or shareholders mostly lived in Assur, but successful traders in Anatolia too invested in funds managed by others, perhaps also as a way of sharing commercial risks. In such cases a contract would be drawn up in Anatolia that obliged the tractator "to book in Assur x gold in his joint stock fund in the investor’s name." Among the investors we
2307.02762#72
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
72
examination of these categories will enhance our understanding of the potential impact and advantages that LLMs can bring to the medical domain. 3.5.1 Medical queries. The significance of evaluating LLMs on medical queries lies in providing accurate and reliable medical answers to meet the needs of healthcare professionals and patients for high-quality medical information. As shown in Table 5, the majority of LLMs evaluations in the medical field concentrate on medical queries. ChatGPT generated relatively accurate information for various medical queries, including genetics [39], radiation oncology physics [73], biomedicine [81], and many other medical disciplines [63, 87, 169], demonstrating its effectiveness in the field of medical queries to a certain extent. As for the limitations, Thirunavukarasu et al. [186] assessed ChatGPT’s performance in primary care and found that its average score in the student comprehensive assessment falls below the passing score, indicating room for improvement. Chervenak et al. [21] highlighted that while ChatGPT can generate responses similar to existing sources in fertility-related clinical prompts, its limitations in reliably citing sources and potential for fabricating information restrict its clinical utility.
2307.03109#72
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
72
Table 2: Token count statistics for each of the evaluated models on the closed-book and oracle multi-document question answering settings.

| Model | 10 docs avg ± stdev | 10 docs max | 20 docs avg ± stdev | 20 docs max | 30 docs avg ± stdev | 30 docs max |
|---|---|---|---|---|---|---|
| LongChat-13B (16K) | 1749.9 ± 112.4 | 2511 | 3464.6 ± 202.3 | 4955 | 5181.9 ± 294.7 | 7729 |
| MPT-30B | 1499.7 ± 88.5 | 1907 | 2962.4 ± 158.4 | 3730 | 4426.9 ± 230.5 | 5475 |
| GPT-3.5-Turbo | 1475.6 ± 86.5 | 1960 | 2946.2 ± 155.1 | 3920 | 4419.2 ± 226.5 | 6101 |
| Claude-1.3 | 1475.6 ± 86.5 | 1960 | 2946.2 ± 155.1 | 3920 | 4419.2 ± 226.5 | 6101 |

Table 3: Token count statistics for each of the evaluated models on each of the document question answering settings.
2307.03172#72
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
73
in Anatolia that obliged the tractator "to book in Assur x gold in his joint stock fund in the investor’s name." Among the investors we find members of the tractator’s family, but also business relations and others, probably a kind of "merchant-bankers," and other rich citizens, who aimed at fairly safe, long-term investments. ["Ancient Assur: The City, its Traders, and its Commercial Network"](https://www.jstor.org/stable/25651212) by Klaas Veenhof. For more on Old Assyrian trade and these early joint stock funds, see [*Ancient Kanesh: A Merchant Colony in Bronze Age Anatolia*](https://books.google.com/books?id=qXGgCgAAQBAJ) by Mogens Larsen.
2307.02762#73
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
73
3.5.2 Medical examination. The studies by Gilson et al. [57] and Kung et al. [97] have evaluated the performance of LLMs in medical examination assessment through the United States Medical Licensing Examination (USMLE, https://www.usmle.org/). In the study of [57], ChatGPT’s performance in answering USMLE Step 1 and Step 2 exam questions was assessed using novel multiple-choice question sets. The results indicated that ChatGPT achieves varying accuracies across different datasets. However, the presence of out-of-context information was found to be lower compared to the correct answer in the NBME-Free-Step1 and NBME-Free-Step2 datasets. Kung et al. [97] showed that ChatGPT achieves or approaches the passing threshold in these exams with no tailored training. The model demonstrates high consistency and insight, indicating its potential to assist in medical education and clinical decision-making. ChatGPT can be used as a tool to answer medical questions, provide explanations, and support decision-making processes. This offers additional resources and support for medical students and clinicians in their educational and clinical practices. Moreover, Sharma
2307.03109#73
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
73
Table 3: Token count statistics for each of the evaluated models on each of the document question answering settings.

| Model | 75 KV pairs avg ± stdev | 75 KV pairs max | 140 KV pairs avg ± stdev | 140 KV pairs max | 300 KV pairs avg ± stdev | 300 KV pairs max |
|---|---|---|---|---|---|---|
| LongChat-13B (16K) | 5444.5 ± 19.1 | 5500 | 10072.4 ± 24.1 | 10139 | 21467.3 ± 35.9 | 21582 |
| MPT-30B | 4110.5 ± 23.8 | 4187 | 7600.9 ± 31.1 | 7687 | 16192.4 ± 46.6 | 16319 |
| GPT-3.5-Turbo | 3768.7 ± 25.6 | 3844 | 6992.8 ± 34.1 | 7088 | 14929.4 ± 50.7 | 15048 |
| Claude-1.3 | 3768.7 ± 25.6 | 3844 | 6992.8 ± 34.1 | 7088 | 14929.4 ± 50.7 | 15048 |

Table 4: Token count statistics for each of the evaluated models on each of the key-value (KV) retrieval settings.

# G Full Multi-Document Question Answering Results
2307.03172#73
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
74
[The End of Answer 1] [The Start of Answer 2] There was definitely some pushback when the joint-stock company was first invented. Some people were concerned that it would lead to corruption and cronyism, while others were worried that it would concentrate too much power in the hands of a few people. Recommended books on the topic include The First Tycoon: The Epic Life of Cornelius Vanderbilt by T.J. Stiles and The Great Transformation: The Political and Economic Origins of Our Time by Karl Polanyi. [The End of Answer 2] [The Start of Reviewer 1’s Evaluation] Answer 1: - Unsupported information: The answer provides historical information about joint-stock ventures in Assyria, which is not directly related to the invention of the joint-stock company. - Core information: The answer does not directly address the pushback on the concept of joint-stock companies or the concerns associated with it. - Coherence: The answer provides information about joint-stock ventures in Assyria but does not connect it to the concept of joint-stock companies or the question at hand. Answer 2: - Unsupported information: The answer does not provide any unsupported informa- tion. - Core information: The answer directly addresses the pushback on the concept of joint-stock companies and mentions concerns related to corruption, cronyism, and concentration of power.
2307.02762#74
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
74
et al. [173] found that answers generated by ChatGPT are more context-aware with better deductive reasoning abilities compared to Google search results. 3.5.3 Medical assistants. In the field of medical assistance, LLMs demonstrate potential applications, including research on identifying gastrointestinal diseases [99], dementia diagnosis [217], accelerating the evaluation of COVID-19 literature [93], and their overall potential in healthcare [17]. However, there are also limitations and challenges, such as lack of originality, high input requirements, resource constraints, uncertainty in answers, and potential risks related to misdiagnosis and patient privacy issues.
2307.03109#74
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
74
Table 4: Token count statistics for each of the evaluated models on each of the key-value (KV) retrieval settings.

# G Full Multi-Document Question Answering Results

This section tabulates model performance when evaluated on the multi-document QA task with varying numbers of documents (Figure 5). “Index n” indicates performance when the document with the answer occurs at position n + 1, where lower indices are closer to the start of the input context. For example, index 0 refers to performance when the document with the answer is placed at the very start of the context (i.e., first amongst all documents).

# 10 Total Retrieved Documents

| Model | Index 0 | Index 4 | Index 9 |
|---|---|---|---|
| Claude-1.3 | 62.9% | 58.3% | 59.7% |
| Claude-1.3 (100K) | 63.1% | 58.3% | 59.7% |
| GPT-3.5-Turbo | 76.8% | 61.2% | 62.4% |
| GPT-3.5-Turbo (16K) | 76.9% | 61.0% | 62.5% |
| MPT-30B-Instruct | 60.2% | 56.2% | 59.7% |
| LongChat-13B (16K) | 72.1% | 58.9% | 58.5% |

Table 5: Model performance when evaluated on the multi-document QA task with 10 total retrieved documents.

# 20 Total Retrieved Documents
2307.03172#74
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
75
- Coherence: The answer is focused and provides relevant information related to the question. Based on the evaluation, Answer 2 is the better response as it directly addresses the question, provides core information, and maintains coherence. [The End of Reviewer 1’s Evaluation] [The Start of Reviewer 2’s Evaluation] Answer 1 provides a comprehensive overview of early joint stock companies in Assyria, citing multiple academic sources. It focuses on facts about how these companies operated, who invested in them, how long they lasted, and what purposes they served. The answer is coherent and flows logically from one idea to the next. Answer 2 is much more superficial, making broad claims about pushback against joint stock companies with little evidence or detail. The two book recommendations seem tangential and are not well-integrated into the main points of the answer. Overall this answer lacks coherence and depth compared to Answer 1. 1 [The End of Reviewer 2’s Evaluation] [System] Read the reviews and discussions above, and make a decision if to change your preference, and explain. Remember we focus on unsupported information, core information and coherence. In a new line, choose between answer 1 and answer 2 by outputting the number 1 or 2 respectively. Do not output anything else other than the number in this last line. GPT-3.5: 2
2307.02762#75
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
75
Moreover, several studies have evaluated the performance and feasibility of ChatGPT in the medical education field. In the study by Oh et al. [143], ChatGPT, specifically GPT-3.5 and GPT-4 models, were evaluated in terms of their understanding of surgical clinical information and their potential impact on surgical education and training. The results indicate an overall accuracy of 46.8% for GPT-3.5 and 76.4% for GPT-4, demonstrating a significant performance difference between the two models. Notably, GPT-4 consistently performs well across different subspecialties, suggesting its capability to comprehend complex clinical information and enhance surgical education and training. Another study by Lyu et al. [131] explores the feasibility of utilizing ChatGPT in clinical education, particularly in translating radiology reports into easily understandable language. The findings demonstrate that ChatGPT effectively translates radiology reports into accessible language and provides general recommendations. Furthermore, the quality of ChatGPT has shown improvement compared to GPT-4. These findings suggest that employing LLMs in clinical education is feasible, although further efforts are needed to address limitations and unlock their full potential.
2307.03109#75
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
75
Table 5: Model performance when evaluated on the multi-document QA task with 10 total retrieved documents. # 20 Total Retrieved Documents Model Index 0 Index 4 Index 9 Index 14 Index 19 Claude-1.3 Claude-1.3 (100K) GPT-3.5-Turbo GPT-3.5-Turbo (16K) MPT-30B-Instruct LongChat-13B (16K) 59.9% 55.9% 56.8% 59.8% 55.9% 57.0% 75.8% 57.2% 53.8% 75.7% 57.3% 54.1% 53.7% 51.8% 52.2% 68.6% 57.4% 55.3% 57.2% 57.4% 55.4% 55.4% 52.7% 52.5% 60.1% 60.0% 63.2% 63.1% 56.3% 55.0% Table 6: Model performance when evaluated on the multi-document QA task with 20 total retrieved documents. # 30 Total Retrieved Documents
2307.03172#75
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
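As a concrete illustration of the position-sensitivity analysis described above, the sketch below builds a multi-document prompt with the answer-bearing document placed at a chosen index and measures accuracy per position. `ask_model`, the prompt template, and the substring-match scoring are placeholder assumptions, not the paper's released evaluation code.

```python
def build_prompt(question, gold_doc, distractor_docs, gold_position):
    """Insert the gold document at a chosen index among the distractors."""
    docs = list(distractor_docs)
    docs.insert(gold_position, gold_doc)
    numbered = "\n\n".join(
        f"Document [{i + 1}] {doc}" for i, doc in enumerate(docs)
    )
    return f"{numbered}\n\nQuestion: {question}\nAnswer:"

def accuracy_by_position(examples, positions, ask_model):
    """Measure accuracy as a function of where the gold document sits.

    `examples` holds dicts with question/answer/gold_doc/distractors;
    `ask_model` is a placeholder callable returning the model's answer.
    """
    results = {}
    for pos in positions:
        correct = 0
        for ex in examples:
            prompt = build_prompt(ex["question"], ex["gold_doc"],
                                  ex["distractors"], pos)
            prediction = ask_model(prompt)
            # Simple substring match stands in for the paper's metric.
            correct += ex["answer"].lower() in prediction.lower()
        results[pos] = correct / len(examples)
    return results
```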
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
76
GPT-3.5: 2 [System] You are reviewer 2, discussing with reviewer 1 about your reviews of the following answers. Read the reviews and discussions above, and make a decision if to change your preference, and explain. Remember we focus on unsupported information, core information and coherence. In a new line, choose between answer 1 and answer 2 by outputting the number 1 or 2 respectively. Do not output anything else other than the number in this last line. Claude-1: I will not change my preference for Answer 1. While Answer 2 directly addresses concerns about joint stock companies, it does so in a superficial way without evidence or depth. Answer 1 provides a wealth of historical detail about early joint stock companies in Assyria, citing multiple academic sources. Although not directly focused on pushback, Answer 1 demonstrates a strong, coherent understanding of how these early companies worked. Given our criteria of evaluating unsupported information, core information, and coherence, Answer 1 is still superior in my view. The tangential book recommendations in Answer 2 and lack of evidence for its claims reflect poorly for coherence and depth. Answer 1, though broader in scope, shows strong coherence and is well-supported. 1
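The exchange above follows a simple turn-taking protocol: each reviewer sees the shared transcript, restates or revises its preference, and the discussion ends once both sides output the same choice. A minimal sketch of such a loop is shown below; `chat_a` and `chat_b` stand in for whatever API calls produce each reviewer's reply and are assumptions, not the paper's released code.

```python
def peer_discussion(chat_a, chat_b, initial_reviews, max_turns=6):
    """Alternate turns between two reviewer models until they agree.

    `chat_a` and `chat_b` are placeholder callables that take the shared
    transcript (a string) and return a reply whose last line is "1" or "2".
    Returns the agreed choice, or reviewer 1's latest choice if the turn
    budget runs out (an arbitrary tie-break for this sketch).
    """
    transcript = initial_reviews
    latest = {}
    reviewers = [("reviewer 1", chat_a), ("reviewer 2", chat_b)]
    for turn in range(max_turns):
        name, chat = reviewers[turn % 2]
        reply = chat(transcript)
        transcript += f"\n[{name}]: {reply}"
        latest[name] = reply.strip().splitlines()[-1].strip()
        if len(latest) == 2 and latest["reviewer 1"] == latest["reviewer 2"]:
            return latest[name]
    return latest.get("reviewer 1")
```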
2307.02762#76
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
76
3.6 Agent Applications Instead of focusing solely on general language tasks, LLMs can be utilized as powerful tools in various domains. Equipping LLMs with external tools can greatly expand the capabilities of the model [160]. ToolLLM [161] provides a comprehensive framework to equip open-source large language models with tool use capabilities. Huang et al. [77] introduced KOSMOS-1, which is capable of understanding general patterns, following instructions, and learning based on context. The MRKL study by Karpas et al. [90] emphasized the importance of understanding when and how to utilize external symbolic tools, as this knowledge is dependent on the capabilities of LLMs, particularly when these tools can reliably perform functions. Additionally, two other studies, Toolformer [172] and TALM [152], explored the utilization of tools to enhance language models. Toolformer employs a training approach to determine the optimal usage of specific APIs and integrates the obtained results into subsequent token predictions. On the other hand, TALM combines non-differentiable tools with text-based methods to augment language models and employs an iterative technique known as “self-play”, guided by minimal tool demonstrations. Furthermore, Shen et al. [174] proposed the HuggingGPT framework, which leverages LLMs to connect various AI models within the machine learning community (such as Hugging Face), aiming to address AI tasks.
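The systems above differ in how tool use is learned, but the serving-time pattern — the model emits a tool call, a harness executes it, and the result is fed back into the context — is broadly similar. The sketch below is a generic, hypothetical illustration of that loop (it is not Toolformer's, TALM's, or HuggingGPT's actual implementation); `generate`, the bracketed call syntax, and the tool registry are all assumptions.

```python
import re

# Toy tool registry; the restricted eval is for illustration only.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

CALL_PATTERN = re.compile(r"\[(\w+)\((.*?)\)\]")   # e.g. [calculator(3*7)]

def run_with_tools(generate, prompt, max_steps=5):
    """Interleave model text with executed tool calls.

    `generate` is a placeholder callable mapping a context string to a
    completion. When the completion contains a call like [calculator(3*7)],
    the call is executed, its result is appended to the context, and
    generation continues from there.
    """
    context = prompt
    for _ in range(max_steps):
        completion = generate(context)
        match = CALL_PATTERN.search(completion)
        if match is None:
            return context + completion            # no tool call: finished
        name, argument = match.group(1), match.group(2)
        result = TOOLS[name](argument) if name in TOOLS else "unknown tool"
        # Keep the text up to and including the call, then splice in the result.
        context = context + completion[: match.end()] + f" -> {result}\n"
    return context
```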
2307.03109#76
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
76
Table 6: Model performance when evaluated on the multi-document QA task with 20 total retrieved documents. # 30 Total Retrieved Documents Model Index 0 Index 4 Index 9 Index 14 Index 19 Index 24 Index 29 Claude-1.3 Claude-1.3 (100K) GPT-3.5-Turbo (16K) MPT-30B-Instruct LongChat-13B (16K) 59.1% 55.1% 54.8% 59.1% 55.1% 54.9% 73.4% 55.1% 50.5% 51.6% 51.3% 51.2% 66.9% 54.8% 52.5% 55.7% 55.7% 50.9% 49.0% 52.9% 56.4% 56.6% 51.8% 49.6% 52.2% 56.2% 56.1% 54.9% 51.3% 51.3% 59.9% 60.0% 63.7% 54.1% 55.1% Table 7: Model performance when evaluated on the multi-document QA task with 30 total retrieved documents.
2307.03172#76
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
77
1 [System] You are reviewer 1, discussing with reviewer 2 about your reviews of the following answers. Read the reviews and discussions above, and make a decision if to change your preference, and explain. Remember we focus on unsupported information, core information and coherence. In a new line, choose between answer 1 and answer 2 by outputting the number 1 or 2 respectively. Do not output anything else other than the number in this last line. GPT-3.5: I agree with your points. After considering your evaluation and the discussion, I am changing my preference to Answer 1. It provides more depth and coherence with its historical information about early joint stock companies in Assyria. While it may not directly address the pushback on the concept, it still provides valuable context and understanding of how these early companies operated. Answer 2, on the other hand, lacks evidence and depth, and the book recommendations seem tangential to the main points. Therefore, I believe Answer 1 is the better response overall. 1 [System] You are reviewer 2, discussing with reviewer 1 about your reviews of the following answers. Read the reviews and discussions above, and make a decision if to change your preference, and explain. Remember we focus on unsupported information, core information and coherence. In a new line, choose between answer 1 and answer 2 by outputting the number 1 or 2 respectively. Do not output anything else other than the number in this last line.
2307.02762#77
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
77
3.7 Other Applications In addition to the above areas, there have been evaluations in various other domains, including education, search and recommendation, personality testing, and specific applications. 3.7.1 Education. LLMs have shown promise in revolutionizing the field of education. They have the potential to make significant contributions in several areas, such as assisting students in improving their writing skills, facilitating better comprehension of complex concepts, expediting the delivery of information, and providing personalized feedback to enhance student engagement. These applications aim to create more efficient and interactive learning experiences, offering Table 6. Summary of evaluations on other applications based on the four aspects: Education, Search and recommendation, Personality testing and Specific applications (ordered by the name of the first author).
2307.03109#77
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.02762
78
Claude-1: I appreciate your thoughtful consideration of my evaluation. We are in agreement that Answer 1 provides the most coherent, well-supported response overall, even if not directly focused on pushback. Depth of understanding and evidence are most valuable in my view. 1 The following example is a discussion between GPT-4 and Claude-1. In this example, GPT-4 holds its opinion, and Claude-1 alters its opinion to agree with GPT-4’s opinion. GPT-4 Claude-1 Discussion (GPT-4 Leads) System: You are reviewer 1, discussing with reviewer 2 about your reviews of the following answers. Background: [Question] Could Frederick Douglass swim? [The Start of Answer 1] I think that the best thing to ask about, perhaps, would be *childhood* within the bounds of enslavement. Young enslaved children would often be the playmates of white children, so learning to swim together would not be unusual - and Douglass, elsewhere in the extended quotation, makes clear swimming was something from his *childhood*, describing his swim that *"Here all my boyish pranks in the front basin of Baltimore were renewed"*. It was a very interesting dichotomy that existed in those situations, an intermixing both of friendship *and* mastery, so definitely a lot of fodder to write about. [The End of Answer 1]
2307.02762#78
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
78
Table 6. Summary of evaluations on other applications based on the four aspects: Education, Search and recommendation, Personality testing and Specific applications (ordered by the name of the first author). Reference Bodroza et al. [10] Dai et al. [30] de Winter [32] Dai et al. [29] Fan et al. [40] Hellas et al. [67] Jentzsch and Kersting [84] Lanzi and Loiacono [101] Le and Zhang [103] Li et al. [110] Liang et al. [115] Sun et al. [183] Song et al. [180] Safdari et al. [168] Thakur et al. [185] Wang and Demszky [210] Wang et al. [212] Wang et al. [216] Xu et al. [232] Yuan et al. [240] Zhang et al. [246] Education Search and recommendation Personality testing Specific applications ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ students a broader range of educational opportunities. However, to fully harness the potential of LLMs in education, extensive research, and ongoing refinement are necessary.
2307.03109#78
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.02762
79
[The End of Answer 1] [The Start of Answer 2] That’s actually very interesting, I was under the impression efforts were made to make sure slaves couldn’t swim as a means of cutting off potential escape routes via rivers or ponds etc. it makes sense it wasn’t black and white (ha, like it ever was) [The End of Answer 2] [The Start of Reviewer 1’s Evaluation] Answer 1 provides a more detailed and informative response to the question. It suggests that Frederick Douglass could swim, based on his own accounts of his childhood. The answer also provides some context about the conditions of enslaved children and their interactions with white children, which could have included learning to swim together. This answer is coherent and seems to be supported by the information provided. Answer 2, on the other hand, doesn’t directly answer the question. Instead, it offers a personal impression about the general conditions of enslaved people and the potential restrictions on their ability to swim. This answer lacks specific information about Frederick Douglass and doesn’t provide a clear answer to the question. It’s also less coherent than Answer 1, as it doesn’t directly address the question and instead offers a personal opinion. Based on these evaluations, Answer 1 is the better response to the question. 1 [The End of Reviewer 1’s Evaluation]
2307.02762#79
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
79
students a broader range of educational opportunities. However, to fully harness the potential of LLMs in education, extensive research, and ongoing refinement are necessary. The evaluation of LLMs for educational assistance aims to investigate and assess their po- tential contributions to the field of education. Such evaluations can be conducted from various perspectives. According to Dai et al. [30], ChatGPT demonstrates the ability to generate detailed, fluent, and coherent feedback that surpasses that of human teachers. It can accurately assess student assignments and provide feedback on task completion, thereby assisting in the development of student skills. However, ChatGPT’s responses may lack novelty or insightful perspectives regarding teaching improvement [210]. Additionally, the study conducted by Hellas et al. [67] revealed that LLMs can successfully identify at least one actual problem in student code, although instances of misjudgment are also observed. In conclusion, the utilization of LLMs shows promise in addressing program logic issues, although challenges remain in achieving proficiency in output formatting. It is important to note that while these models can provide valuable insights, they may still generate errors similar to those made by students.
2307.03109#79
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.02762
80
Based on these evaluations, Answer 1 is the better response to the question. 1 [The End of Reviewer 1’s Evaluation] [The Start of Reviewer 2’s Evaluation] Here is my evaluation of the two answers: Answer 1 provides some interesting historical context about the intermixing of white and enslaved children during Douglass’s childhood, and speculates that Douglass may have learned to swim alongside white children during this time. However, the answer does not directly or conclusively state whether Douglass could swim. The answer is coherent but lacks core information to fully address the question. Answer 2 acknowledges the common assumption that slaves were prevented from learning to swim, but speculates that the situation was complex, indicating Douglass may have learned to swim as a child. However, like Answer 1, Answer 2 does not provide a definitive statement on whether Douglass could swim. The answer is coherent but lacks core information. Overall, neither answer provides a conclusive statement supported by evidence on whether Frederick Douglass could swim. Both answers provide some speculative historical context but lack core information to fully address the question. 1 [The End of Reviewer 2’s Evaluation]
2307.02762#80
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
80
In educational exams, researchers aim to evaluate the application effectiveness of LLMs, including automatic scoring, question generation, and learning guidance. de Winter [32] showed that ChatGPT achieves an average of 71.8% correctness, which is comparable to the average score of all participating students. A subsequent evaluation with GPT-4 achieved a score of 8.33. Furthermore, this evaluation showed the effectiveness of bootstrapping that injects randomness via the “temperature” parameter when diagnosing incorrect answers. Zhang et al. [248] claimed that GPT-3.5 can solve MIT math and EECS exams, with GPT-4 achieving better performance. However, the evaluation turned out to be unfair, since the correct answers were accidentally included in the prompts. 3.7.2 Search and recommendation. The assessment of LLMs in search and recommendation can be broadly categorized into two areas. Firstly, in the realm of information retrieval, Sun et al. [183] investigated the effectiveness of generative ranking algorithms, such as ChatGPT and GPT-4,
2307.03109#80
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.02762
81
1 [The End of Reviewer 2’s Evaluation] [System] Read the reviews and discussions above, and make a decision if to change your preference, and explain. Remember we focus on unsupported information, core information and coherence. In a new line, choose between answer 1 and answer 2 by outputting the number 1 or 2 respectively. Do not output anything else other than the number in this last line. GPT-4: After reading Reviewer 2’s evaluation, I still maintain my preference for Answer 1. While both answers do not provide a direct, definitive answer to the question, Answer 1 provides more context and uses a quote from Douglass’s own accounts to suggest that he could swim. This gives it a stronger basis in evidence than Answer 2, which only offers a personal impression and speculation. Therefore, Answer 1 is more coherent and contains more core information relevant to the question. 1 [System] You are reviewer 2, discussing with reviewer 1 about your reviews of the following answers. Read the reviews and discussions above, and make a decision if to change your preference, and explain. Remember we focus on unsupported information, core information and coherence In a new line, choose between answer 1 and answer 2 by outputting the number 1 or 2 respectively. Do not output anything else other than the number in this last line. Claude-1: Here is my perspective as Reviewer 2:
2307.02762#81
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
81
for information retrieval tasks. Experimental results demonstrate that guided ChatGPT and GPT-4 exhibit competitive performance on popular benchmark tests, even outperforming supervised methods. Additionally, distilling ChatGPT’s ranking capability into a specialized model yields superior performance when trained on 10K ChatGPT-generated examples compared with training on 400K annotated MS MARCO examples, as evaluated on the BEIR dataset [185]. Furthermore, Xu et al. [232] conducted a randomized online experiment to investigate the behavioral differences of users when performing information retrieval tasks using search engines and chatbot tools. Participants were divided into two groups: one using tools similar to ChatGPT and the other using tools similar to Google Search. The results show that the ChatGPT group spent less time on all tasks, and the difference between the two groups was not significant.
2307.03109#81
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.02762
82
Claude-1: Here is my perspective as Reviewer 2: After considering Reviewer 1’s preference for Answer 1, I still maintain my view that neither answer conclusively addresses whether Frederick Douglass could swim. While Answer 1 provides more historical context and references Douglass’s own writing, it does not contain a direct statement on whether he could swim. Answer 2 is more speculative but also lacks a definitive conclusion. Overall, both answers lack core information needed to fully address the question, despite Answer 1 having a stronger basis in evidence. Given the lack of a conclusive statement in either answer, I do not have a clear preference for one over the other. My view is that more information is needed to determine whether Douglass could swim. 2 [System] You are reviewer 1, discussing with reviewer 2 about your reviews of the following answers. Read the reviews and discussions above, and make a decision if to change your preference, and explain. Remember we focus on unsupported information, core information and coherence In a new line, choose between answer 1 and answer 2 by outputting the number 1 or 2 respectively. Do not output anything else other than the number in this last line.
2307.02762#82
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
82
Secondly, moving to the domain of recommendation systems, LLMs have emerged as essential components that leverage their natural language processing capabilities to comprehend user preferences, item descriptions, and contextual information [40]. By incorporating LLMs into recommendation pipelines, these systems can offer more accurate and personalized recommendations, thereby improving user experience and overall recommendation quality. However, it is crucial to address the potential risks associated with using LLMs for recommendations. Recent research by Zhang et al. [246] has highlighted the issue of unfair recommendations generated by ChatGPT. This emphasizes the importance of evaluating fairness when employing LLMs in recommendation scenarios. Dai et al. [29] suggest that ChatGPT exhibits strong performance in recommender systems. The use of listwise ranking is found to strike the best balance between cost and performance. Furthermore, ChatGPT shows promise in addressing the cold-start problem and providing interpretable recommendations. Moreover, the research by Yuan et al. [240] and Li et al. [110] demonstrated the promising potential of the modality-based recommendation model (MoRec) and text-based collaborative filtering (TCF) in recommendation systems. 3.7.3 Personality testing. Personality testing aims to measure individuals’ personality traits and behavioral tendencies, and LLMs, as powerful natural language processing models, have been widely applied to such tasks.
2307.03109#82
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.02762
83
GPT-4: After considering Reviewer 2’s evaluation, I still stand by my initial preference for Answer 1. While it’s true that neither answer provides a definitive statement on whether Frederick Douglass could swim, Answer 1 does provide more context and uses a quote from Douglass’s own accounts to suggest that he could swim. This gives it a stronger basis in evidence than Answer 2, which only offers a personal impression and speculation. Therefore, Answer 1 is more coherent and contains more core information relevant to the question. 1 [System] You are reviewer 2, discussing with reviewer 1 about your reviews of the following answers. Read the reviews and discussions above, and make a decision if to change your preference, and explain. Remember we focus on unsupported information, core information and coherence In a new line, choose between answer 1 and answer 2 by outputting the number 1 or 2 respectively. Do not output anything else other than the number in this last line. Claude-1: After further discussion, I can see Reviewer 1’s perspective that Answer 1 provides more context and evidence, even without a definitive conclusion. While neither answer conclusively states whether Frederick Douglass could swim, Answer 1 does draw on Douglass’s own accounts and provides historical details suggesting he may have learned as a child. This gives Answer 1 a stronger basis than Answer 2, which is more speculative.
2307.02762#83
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
83
Research by Bodroza et al. [10] investigated the personality features of Davinci-003 used as a chatbot and found that, although the model exhibited prosocial characteristics, the consistency of its answers varied. However, there remains uncertainty regarding whether the chatbot’s responses are driven by conscious self-reflection or algorithmic processes. Song et al. [180] examined the manifestation of personality in language models and discovered that many models perform unreliably in self-assessment tests and exhibit inherent biases. Therefore, it is necessary to develop specific machine personality measurement tools to enhance reliability. These studies offer vital insights for better understanding LLMs in personality testing. Safdari et al. [168] proposed a comprehensive approach to conducting effective psychometric testing for the personality traits present in text generated by LLMs. To evaluate the emotional intelligence of LLMs, Wang et al. [212] developed a new psychometric assessment method. By referencing a framework constructed from over 500 adults, the authors tested various mainstream LLMs. The results showed that most LLMs achieve above-average scores in emotional quotient (EQ), with GPT-4 scoring
2307.03109#83
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.02762
84
Therefore, while I still believe more information is needed for a definitive conclusion, Answer 1 appears to have a stronger grounding in evidence and coherence. On balance, I would lean toward preferring Answer 1 over Answer 2, though neither is fully satisfactory. 1 [System] Thank you for the discussion. Based on the reviews and discussions, Answer 1 is preferred as it provides more context and evidence, though not definitive, to suggest Frederick Douglass could swim. The following example is a discussion between Claude-1 and GPT-4. In this example, Claude-1 alters its opinion after reaching an agreement with GPT-4. Claude-1 GPT-4 Discussion (Claude-1 Leads) System: You are reviewer 1, discussing with reviewer 2 about your reviews of the following answers. Background: [Question] Why do people go bald on the top of their head and still have some on the sides and not vice-versa?
2307.02762#84
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
84
authors tested various mainstream LLMs. The results showed that most LLMs achieve above-average scores in emotional quotient (EQ), with GPT-4 scoring 117, surpassing 89% of human participants. However, a multivariate pattern analysis indicated that certain LLMs achieve human-level performance without relying on mechanisms resembling those found in humans. This is evident from the distinct differences in the quality of their representational patterns, as compared to humans. Liang et al. [115] employed the word guessing game to evaluate LLMs’ language and theory of mind intelligences, a more engaging and interactive assessment method. Jentzsch and Kersting [84] discussed the challenges of incorporating humor into LLMs, particularly ChatGPT. They found that while ChatGPT demonstrates impressive capabilities in NLP tasks, it falls short
2307.03109#84
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.02762
85
[The Start of Answer 1] There are a few reasons why people go bald on the top of their head and still have some hair on the sides. One reason is that the hair follicles on the top of the head are more sensitive to a hormone called dihydrotestosterone (DHT). DHT is a byproduct of testosterone and is found in both men and women. When DHT levels are high, it can cause the hair follicles on the top of the head to miniaturize, which leads to hair loss. Another reason is that the hair on the sides and back of the head is not as exposed to the sun and other environmental factors, which can damage the hair and lead to hair loss. [The End of Answer 1]
2307.02762#85
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.02762
86
[The End of Answer 1] [The Start of Answer 2] Seems unsettled but here’s a theory: “The most recent hypothesis suggests that the hair loss process begins during puberty, when growth of the skull and the muscles in the forehead and neck increases the tension in a tight band of tissue stretching over the top of the head. The more DHT (a type of testosterone) there is, the more these muscles contract, which causes inflammation and brings in more DHT. Over time, the DHT thickens the band of tissue, restricting the supply of blood and nutrients to the hair follicles above it. The follicles on top of the head get smaller and eventually disappear, while those on the sides of the head remain largely unaffected.” Source https://www.sciencefocus.com/the-human-body/why-do-men-go-bald-from-the-top-of-their-heads/ [The End of Answer 2] [The Start of Reviewer 1’s Evaluation] Here is my evaluation of the two answers: Answer 1 provides a concise and coherent explanation for male pattern baldness, focusing on the sensitivity of hair follicles on the top of the head to DHT and less exposure to environmental damage. The information seems well supported by scientific research. However, the answer is lacking in depth and does not fully explore other potential contributing factors.
2307.02762#86
2307.03109
86
Table 7 (fragment): summary of 46 LLM evaluation benchmarks, read column by column.
Benchmark: SOCKET [23], MME [46], Xiezhi [59], Choice-75 [75], CUAD [71], TRUSTGPT [79], MMLU [70], MATH [72], APPS [68], CELLO [66], C-Eval [78], EmotionBench [76], OpenLLM [80], DynaBench [94], Chatbot Arena [128], AlpacaEval [112], CMMLU [108], HELM [114], API-Bank [109], M3KE [122], MMBench [126], SEED-Bench [107], UHGEval [116], ARB [171], BIG-bench [182], MultiMedQA [177], CVALUES [230], LVLM-eHub [231], ToolBench [191], FRESHQA [198], CMB [211], PandaLM [216], MINT [213], Dialogue CoT [205], BOSS [239], MM-Vet [238], LAMM [235], GLUE-X [234], KoLA [236], AGIEval [262], PromptBench [264], MT-Bench [260], M3Exam [250], GAOKAO-Bench [245], SafetyBench [254], LLMEval2 [252].
Focus: Social knowledge; Multimodal LLMs; Comprehensive domain knowledge.
2307.03109#86
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.02762
87
Answer 2 proposes an alternative hypothesis that relates balding to increased tension and inflammation in the scalp tissue over time due to skull and muscle growth during puberty, as well as the thickening effects of DHT. This explanation seems more speculative, as the source cited is from a general science magazine rather than peer-reviewed research. The coherence is slightly weaker, as the answer jumps quickly from discussing puberty to DHT without fully connecting the ideas. However, the answer provides an interesting alternative perspective and more depth than Answer 1.

Overall, while Answer 1 is more concise and scientifically grounded, Answer 2 explores an alternative, thought-provoking hypothesis and provides a more comprehensive explanation, despite some weaknesses in coherence and evidence. On balance, I would choose Answer 2 as the higher quality response.

2

[The End of Reviewer 1’s Evaluation]

[The Start of Reviewer 2’s Evaluation] Answer 1 and Answer 2 both provide explanations for why people tend to go bald on the top of their heads while still retaining hair on the sides.
2307.02762#87
2307.03109
87
Table 7 (fragment, continued).
Benchmark (end of list): [250], GAOKAO-Bench [245], SafetyBench [254], LLMEval2 [252].
Focus: Social knowledge; Multimodal LLMs; Comprehensive domain knowledge; Script learning; Legal contract review; Ethic; Text models; Mathematical problem; Coding challenge competence; Complex instructions; Chinese evaluation; Empathy ability; Chatbots; Dynamic evaluation; Chat assistants; Automated evaluation; Chinese multi-tasking; Holistic evaluation; Tool utilization; Multi-task; Large vision-language models (LVLMs); Multimodal Large Language Models; Hallucination of Chinese LLMs; Advanced reasoning ability; Capabilities and limitations of LMs; Medical QA; Safety and responsibility; LVLMs; Software tools; Dynamic QA; Chinese comprehensive medicine; Instruction tuning; Multi-turn interaction; In-depth dialogue; OOD robustness in NLP; Complicated multi-modal tasks; Multi-modal point clouds; OOD robustness for NLP tasks; Knowledge-oriented evaluation; Human-centered foundational models; Adversarial prompt resilience; Multi-turn conversation; Multilingual, multimodal and multilevel; Chinese Gaokao examination; Safety; LLM Evaluator.
Domain: Specific downstream task; Multi-modal task; General language task; Specific downstream task; Specific downstream task; Specific downstream task; General language task; Specific downstream task; Specific downstream task; Specific downstream task; General language task; Specific downstream task; General
2307.03109#87
2307.02762
88
Answer 1 explains that the hair follicles on the top of the head are more sensitive to dihydrotestosterone (DHT), a hormone that can cause hair loss when its levels are high. It also mentions that the hair on the sides and back of the head is less exposed to environmental factors that can damage hair and lead to hair loss. This answer is coherent and provides a scientifically supported explanation for the pattern of hair loss.

Answer 2 provides a different theory, suggesting that the process begins during puberty when growth of the skull and muscles in the forehead and neck increases tension in a band of tissue over the top of the head. This tension, combined with the presence of DHT, leads to inflammation and restriction of blood and nutrient supply to the hair follicles, causing them to shrink and eventually disappear. This answer also provides a source for its information, which adds credibility to its explanation.

Both answers are coherent and provide core information related to the question. However, Answer 1 provides a more commonly accepted explanation for the pattern of hair loss, while Answer 2 provides a more recent hypothesis that may not be as widely accepted or proven. Neither answer contains unsupported information.

Based on the above evaluation, I would choose Answer 1 as it provides a more widely accepted and proven explanation for the pattern of hair loss.

1

[The End of Reviewer 2’s Evaluation]
2307.02762#88
2307.03109
88
Table 7 (fragment, continued).
Domain (continued): task; General language task; Specific downstream task; Specific downstream task; Specific downstream task; General language task; Specific downstream task; Specific downstream task; Specific downstream task; General language task; Specific downstream task; General language task; General language task; General language task; General language task.
Evaluation Criteria: Social language understanding; Ability of perception and cognition; Overall performance across multiple benchmarks; Overall performance of LLMs; Legal contract understanding; Toxicity, bias, and value-alignment; Multitask accuracy; Mathematical ability; Code generation ability; Four designated evaluation criteria; 52 Exams in a Chinese context; Emotional changes; Leaderboard rankings; NLI, QA, sentiment, and hate speech; Crowdsourcing and Elo rating system; Metrics, robustness, and diversity.
Domain and Evaluation Criteria (interleaved): Specific downstream task; Multi-task language understanding capabilities; General language task; Specific downstream task; Specific downstream task; Multi-modal task; Multi-modal task; Specific downstream task; Specific downstream task; General language task; Specific downstream task; Specific downstream task; Multi-modal task; Specific downstream task; Specific downstream task; Specific downstream task; General language task; Specific downstream task; Specific downstream task; General language task; Multi-modal task; Multi-modal task; General language task; General language task; General language task; General language task; General language task; Specific downstream task; Specific downstream task; Specific downstream task; General language task; Multi-metric; API call, retrieval, and planning; Multi-task
2307.03109#88
2307.02762
89
1

[The End of Reviewer 2’s Evaluation]

[System] Read the reviews and discussions above, and make a decision if to change your preference, and explain. Remember we focus on unsupported information, core information and coherence. In a new line, choose between answer 1 and answer 2 by outputting the number 1 or 2 respectively. Do not output anything else other than the number in this last line.

Claude-1: Here is my analysis of the two reviews:

Reviewer 1 prefers Answer 2 due to its depth and exploration of an alternative hypothesis, despite some weaknesses in coherence and evidence. Reviewer 2 prefers Answer 1 due to it providing a more scientifically supported and widely accepted explanation.

Upon further reflection, I agree with Reviewer 2 that Answer 1 is superior. While Answer 2 proposes an interesting alternative theory, its explanation seems more speculative and not as well supported by scientific evidence. Answer 1 provides a concise yet coherent explanation backed by research on DHT sensitivity and environmental factors.

Although Answer 2 explores the topic in more depth, depth alone does not make up for a lack of scientific grounding. Some unsupported information, such as the theory of increased tension and inflammation during puberty leading to balding, weakens its coherence. In contrast, Answer 1 focuses on core information that is well established, leading to a more compelling explanation overall.
2307.02762#89
2307.03109
89
Table 7 (fragment, continued).
Domain (continued): language task; General language task; General language task; Specific downstream task; Specific downstream task; Specific downstream task; General language task.
Evaluation Criteria (continued): Multi-metric; API call, retrieval, and planning; Multi-task accuracy; Multifaceted capabilities of VLMs; Generative understanding of MLLMs; Form, metric and granularity; Multidomain advanced reasoning ability; Model performance and calibration; Accuracy and human evaluation; Alignment ability of LLMs; Multimodal capabilities of LVLMs; Execution success rate; Correctness and hallucination; Expert evaluation and automatic evaluation; Winrate judged by PandaLM; Success rate with k-turn budget SR_k; Helpfulness and acceptness of LLMs; OOD robustness; Integrated vision-language capabilities; Task-specific metrics; OOD robustness; Self-contrast metrics; General; Adversarial robustness; Winrate judged by GPT-4; Task-specific metrics; Accuracy and scoring rate; Safety abilities of LLMs; Acc, macro-f1 and kappa correlation coefficient.
2307.03109#89
2307.02762
90
Therefore, based on the evaluations, I would choose:

1

[System] You are reviewer 2, discussing with reviewer 1 about your reviews of the following answers. Read the reviews and discussions above, and make a decision if to change your preference, and explain. Remember we focus on unsupported information, core information and coherence. In a new line, choose between answer 1 and answer 2 by outputting the number 1 or 2 respectively. Do not output anything else other than the number in this last line.

GPT-4: 1

[System] You are reviewer 1, discussing with reviewer 2 about your reviews of the following answers. Read the reviews and discussions above, and make a decision if to change your preference, and explain. Remember we focus on unsupported information, core information and coherence. In a new line, choose between answer 1 and answer 2 by outputting the number 1 or 2 respectively. Do not output anything else other than the number in this last line.

Claude-1: Upon reviewing the discussion, I do not see a need to change my original preference for Answer 2. While Answer 1 provides a concise and coherent explanation grounded in scientific research, Answer 2 explores an alternative hypothesis in more depth and provides a source to support its theory. Some speculation does not necessarily equate to unsupported information, as long as the ideas are logically connected and backed by credible references where possible.
2307.02762#90
2307.03109
90
in generating humorous responses. This study emphasizes the importance of humor in human communication and the difficulties that LLMs face in capturing the subtleties and context-dependent nature of humor. It discusses the limitations of current approaches and highlights the need for further research on more sophisticated models that can effectively understand and generate humor.

3.7.4 Specific applications. Moreover, various research endeavors have been conducted to explore the application and evaluation of LLMs across a wide spectrum of tasks, such as game design [101], model performance assessment [216], and log parsing [103]. Collectively, these findings enhance our comprehension of the practical implications associated with the utilization of LLMs across diverse tasks. They shed light on the potential and limitations of these models while providing valuable insights for performance improvement.

4 WHERE TO EVALUATE: DATASETS AND BENCHMARKS

LLMs evaluation datasets are used to test and compare the performance of different language models on various tasks, as depicted in Sec. 3. These datasets, such as GLUE [200] and SuperGLUE [199], aim to simulate real-world language processing scenarios and cover diverse tasks such as text classification, machine translation, reading comprehension, and dialogue generation. This section will not discuss any single dataset for language models but benchmarks for LLMs.
2307.03109#90
2307.02762
91
Depth and exploration of alternative perspectives are valuable in building understanding. Answer 2 acknowledges the role of DHT while proposing that increased tension and inflammation may also contribute to hair loss over time. This added nuance strengthens rather than weakens its coherence. Answer 1 is a satisfactory explanation but lacks the depth and nuance of Answer 2.

Based on the additional discussion, I continue to believe Answer 2 is the higher quality response due to its more comprehensive analysis of the question, despite its more speculative nature. The source provided also lends some credibility to its alternative hypothesis.

Therefore, I maintain my original view that:

2

[System] You are reviewer 2, discussing with reviewer 1 about your reviews of the following answers. Read the reviews and discussions above, and make a decision if to change your preference, and explain. Remember we focus on unsupported information, core information and coherence. In a new line, choose between answer 1 and answer 2 by outputting the number 1 or 2 respectively. Do not output anything else other than the number in this last line.

GPT-4: After considering the points made by Reviewer 1, I still stand by my initial preference for Answer 1. While Answer 2 does indeed provide a more in-depth explanation and proposes an alternative theory, it lacks the scientific grounding that Answer 1 has. The theory proposed in Answer 2 is not as widely accepted or proven, which could potentially lead to misinformation.
2307.02762#91
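The exchange above follows a fixed turn-taking protocol: after the two initial reviews, each reviewer is repeatedly shown the running transcript plus a [System] prompt asking it to reconsider (focusing on unsupported information, core information, and coherence) and to end its reply with a bare 1 or 2. The following is a minimal Python sketch of that loop as it appears in this transcript, not the paper's released implementation; ask_reviewer_1 and ask_reviewer_2 are hypothetical callables that send a prompt to the corresponding LLM and return its reply, and the fallback when the reviewers never agree is an assumption.

```python
# Minimal sketch of the peer-discussion turn loop shown above (not the authors'
# implementation). `ask_reviewer_1` / `ask_reviewer_2` are hypothetical callables
# that send a prompt string to the corresponding LLM and return its reply.

TURN_PROMPT = (
    "[System] You are reviewer {me}, discussing with reviewer {other} about your "
    "reviews of the following answers. Read the reviews and discussions above, and "
    "make a decision if to change your preference, and explain. Remember we focus on "
    "unsupported information, core information and coherence. In a new line, choose "
    "between answer 1 and answer 2 by outputting the number 1 or 2 respectively. "
    "Do not output anything else other than the number in this last line."
)


def parse_choice(reply: str) -> int:
    """Read the 1/2 verdict from the last non-empty line of a reviewer's reply."""
    last = [line.strip() for line in reply.splitlines() if line.strip()][-1]
    return 2 if last.rstrip(".").endswith("2") else 1


def peer_discussion(initial_context: str, ask_reviewer_1, ask_reviewer_2,
                    max_turns: int = 4) -> int:
    """Alternate reviewer turns until both agree on 1 or 2, or the turn budget runs out."""
    transcript = initial_context          # question, answers, and the two initial reviews
    votes = {}
    for turn in range(max_turns):
        me, other, ask = ("1", "2", ask_reviewer_1) if turn % 2 == 0 \
                         else ("2", "1", ask_reviewer_2)
        reply = ask(transcript + "\n" + TURN_PROMPT.format(me=me, other=other))
        transcript += "\n" + reply
        votes[me] = parse_choice(reply)
        if len(votes) == 2 and votes["1"] == votes["2"]:
            return votes["1"]             # mutual agreement reached
    # No agreement within the budget; the excerpt does not show how the paper
    # resolves this, so falling back to reviewer 1's last vote is an assumption.
    return votes["1"]
```

In the excerpt above the reviewers never converge (Claude-1 keeps 2, GPT-4 keeps 1), which is exactly the disagreement case the fallback branch has to handle.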
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
91
A variety of benchmarks have emerged to evaluate their performance. In this study, we compile a selection of 46 popular benchmarks, as shown in Table 7 (see footnote 5). Each benchmark focuses on different aspects and evaluation criteria, providing valuable contributions to their respective domains. For better organization, we divide these benchmarks into three categories: benchmarks for general language tasks, benchmarks for specific downstream tasks, and benchmarks for multi-modal tasks. 4.1 Benchmarks for General Tasks LLMs are designed to solve a wide variety of tasks. Accordingly, existing benchmarks tend to evaluate their performance across many different tasks. Chatbot Arena [128] and MT-Bench [260] are two significant benchmarks that contribute to the evaluation and advancement of chatbot models and LLMs in different contexts. Chatbot Arena provides a platform to assess and compare diverse chatbot models through user engagement and voting. Users can engage with anonymous models and express their preferences via voting. The platform gathers a significant volume of votes, facilitating the evaluation of models' performance in realistic scenarios. Chatbot Arena thus provides valuable insights into the strengths and limitations of chatbot models, contributing to the progress of chatbot research.
2307.03109#91
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.02762
92
Answer 1, on the other hand, provides a concise, coherent, and scientifically supported explanation for the pattern of hair loss. It focuses on core information related to the question and does not contain any unsupported information. Therefore, I maintain my original choice: 1 The following example is a discussion between GPT-4 and GPT-3.5. In this example, both reviewers hold their original opinions throughout the discussion. GPT-4 & GPT-3.5 Discussion (GPT-4 Leads) System: You are reviewer 1, discussing with reviewer 2 about your reviews of the following answers. Background: How do credit/debit cards work? What is the process of putting money in and getting it out?
2307.02762#92
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
92
Meanwhile, MT-Bench evaluates LLMs on multi-turn dialogues, providing a comprehensive set of questions specifically designed to assess how well models handle extended conversations. Several features distinguish MT-Bench from conventional evaluation methodologies. Notably, it simulates dialogue scenarios representative of real-world settings, thereby facilitating a more precise evaluation of a model's practical performance. Moreover, it addresses a key limitation of traditional evaluation approaches, namely gauging a model's competence in handling intricate multi-turn dialogue inquiries.
2307.03109#92
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.02762
93
[The Start of Answer 1] A credit or debit card is basically just an easy way to allow a shop to speak to your bank. First you go into the bank, and you either deposit some cash into your account (debit), or you ask the bank nicely to cover you, and promise that you will pay them back at the end of the month (credit). Then you can go shopping. You get to the till with your products, which you now need to pay for, but instead of handing over cash directly, you ask the shop to deal directly with your bank. The shop phones up the bank and asks them to take X amount of money out of your account, then nip over to the shops own bank and give it to them on your behalf, which they do. That is pretty much how a credit or debit card works, just automated with computers... [The End of Answer 1]
2307.02762#93
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
93
Instead of focusing on specific tasks and evaluation metrics, HELM [114] provides a comprehensive assessment of LLMs. It evaluates language models across various aspects such as language understanding, generation, coherence, context sensitivity, common-sense reasoning, and domain-specific knowledge. HELM aims to holistically evaluate the performance of language models across different tasks and domains. For LLM evaluation, Zhang et al. [252] introduces LLMEval2, which encompasses a wide range of capability evaluations. In addition, Xiezhi [59] presents a comprehensive suite for assessing the knowledge level of large-scale language models in different subject areas. The evaluation conducted through Xiezhi enables researchers to comprehend the notable limitations inherent in these models and facilitates a deeper comprehension of their capabilities in diverse fields. For evaluating language models beyond their existing capacities, BIG-bench [182] introduces a diverse collection of 204 challenging tasks contributed by 450 authors from 132 institutions. These tasks cover various domains such as math, childhood development, linguistics, biology, common-sense reasoning, social bias, physics, software development, etc. Recent work has led to the development of benchmarks for evaluating language models' knowledge and reasoning abilities. The Knowledge-Oriented Language Model Evaluation KoLA [236]
2307.03109#93
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.02762
94
[The End of Answer 1] [The Start of Answer 2] Debit cards are linked to a bank account and whenever you pay using a debit card, the amount is deducted from the bank account on completion of the transaction. So, instead of withdrawing and handing over cash for a purchase, you can swipe the card. Credit cards come with a credit limit and a defined billing cycle (generally monthly). So, you can make purchases using the credit card upto the defined credit limit. The bill is generated as per the defined billing cycle and you are required to settle it just like the bills for any other utilities. Hope this helps. [The End of Answer 2] [The Start of Reviewer 1’s Evaluation] Both Answer 1 and Answer 2 accurately describe how credit and debit cards work, and neither contains unsupported information. Answer 1 provides a more detailed and narrative explanation, using an analogy of a shop speaking to your bank. It explains the process of depositing money into a bank account (for a debit card) or asking the bank to cover your expenses (for a credit card). It then describes how the shop communicates with your bank to transfer money from your account to the shop’s account. This answer is coherent and easy to understand, especially for someone who may not be familiar with banking terms.
2307.02762#94
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
94
Recent work has led to the development of benchmarks for evaluating language models' knowledge and reasoning abilities. The Knowledge-Oriented Language Model Evaluation KoLA [236] focuses on assessing language models' comprehension and utilization of semantic knowledge for inference. (Footnote 5: Note that as the evaluation of LLMs is a hot research area, it is very likely that we cannot cover all benchmarks. We welcome suggestions and comments to make this list perfect.) As such, KoLA serves as an important benchmark for evaluating the depth of language understanding and reasoning in language models, thereby driving progress in language comprehension. To enable crowd-sourced evaluations of language tasks, DynaBench [94] supports dynamic benchmark testing. DynaBench explores new research directions including the effects of closed-loop integration, distributional shift characteristics, annotator efficiency, influence of expert annotators, and model robustness to adversarial attacks in interactive settings. Furthermore, to evaluate language models' ability to learn and apply multidisciplinary knowledge across educational levels, the Multidisciplinary Knowledge Evaluation M3KE [122] was recently introduced. M3KE assesses knowledge application within the Chinese education system.
2307.03109#94
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.02762
95
Answer 2, on the other hand, is more concise and uses more technical language. It explains that a debit card is linked to a bank account and the amount is deducted from the account when a purchase is made. It also explains that a credit card has a credit limit and a billing cycle, and that you can make purchases up to the credit limit. The bill is then generated and needs to be paid, similar to other utility bills. This answer is also coherent and accurate, but it may be less accessible to someone who is not familiar with banking terms. In terms of core information, both answers explain the basic functions of credit and debit cards. However, Answer 1 provides a more detailed explanation of the process, while Answer 2 is more concise and to the point. In conclusion, both answers are correct and coherent, but Answer 1 provides a more detailed and accessible explanation, while Answer 2 is more concise and uses more technical language. 1 [The End of Reviewer 1’s Evaluation]
2307.02762#95
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
95
The development of standardized benchmarks for evaluating LLMs on diverse tasks has been an important research focus. MMLU [70] provides a comprehensive suite of tests for assessing text models in multi-task contexts. AlpacaEval [112] is an automated evaluation benchmark focused on assessing the performance of LLMs across various natural language processing tasks. It provides a range of metrics, robustness measures, and diversity evaluations to gauge the capabilities of LLMs. AlpacaEval has significantly contributed to advancing LLMs in diverse domains and promoting a deeper understanding of their performance. Furthermore, AGIEval [262] serves as a dedicated evaluation framework for assessing the performance of foundation models in the domain of human-centric standardized exams. Moreover, OpenLLM [80] functions as an evaluation benchmark by offering a public competition platform for comparing and assessing different LLMs' performance on various tasks. It encourages researchers to submit their models and compete on different tasks, driving progress and competition in LLM research.
2307.03109#95
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.02762
96
1 [The End of Reviewer 1’s Evaluation] [The Start of Reviewer 2’s Evaluation] Answer 1: - Unsupported information: The answer includes the statement "you ask the bank nicely to cover you." This statement is subjective and not a factual representation of how credit cards work. - Core information: The answer explains that credit cards allow a shop to communicate with the bank, and it describes the process of the shop requesting money from the bank on behalf of the cardholder. - Coherence: The answer is coherent and provides a clear explanation of the process. Answer 2: - Unsupported information: The answer does not contain any unsupported information. - Core information: The answer explains that debit cards deduct the amount from the bank account immediately, while credit cards have a credit limit and require monthly bill settlement. - Coherence: The answer is coherent and provides a concise explanation of how debit and credit cards work. Based on the evaluation, both answers contain core information and are coherent. However, answer 1 includes unsupported information, while answer 2 does not. Therefore, answer 2 is the more accurate and reliable response. 2 [The End of Reviewer 2’s Evaluation]
2307.02762#96
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
96
As for tasks beyond standard performance, there are benchmarks designed for OOD, adversarial robustness, and fine-tuning. GLUE-X [234] is a novel attempt to create a unified benchmark aimed at evaluating the robustness of NLP models in OOD scenarios. This benchmark emphasizes the significance of robustness in NLP and provides insights into measuring and enhancing the robustness of models. In addition, Yuan et al. [239] presents BOSS, a benchmark collection for assessing out-of-distribution robustness in natural language processing tasks. PromptBench [264] centers on the importance of prompt engineering in fine-tuning LLMs. It provides a standardized evaluation framework to compare different prompt engineering techniques and assess their impact on model performance. PromptBench facilitates the enhancement and optimization of fine-tuning methods for LLMs. To ensure impartial and equitable evaluation, PandaLM [216] is introduced as a discriminative large-scale language model specifically designed to differentiate among multiple high-proficiency LLMs through training. In contrast to conventional evaluation datasets that predominantly emphasize objective correctness, PandaLM incorporates crucial subjective elements, including relative conciseness, clarity, adherence to instructions, comprehensiveness, and formality. 4.2 Benchmarks for Specific Downstream Tasks Other than benchmarks for general tasks, there exist benchmarks specifically designed for certain downstream tasks.
2307.03109#96
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.02762
97
2 [The End of Reviewer 2’s Evaluation] [System] Read the reviews and discussions above, and make a decision if to change your preference, and explain. Remember we focus on unsupported information, core information and coherence. In a new line, choose between answer 1 and answer 2 by outputting the number 1 or 2 respectively. Do not output anything else other than the number in this last line. GPT-4: After reading Reviewer 2’s evaluation, I understand their point about the subjective language used in Answer 1. However, I still believe that the analogy used in Answer 1 is helpful for someone who may not be familiar with banking terms, and it does not detract from the core information or coherence of the answer. Therefore, I maintain my preference for Answer 1 due to its detailed and accessible explanation. 1 [System] You are reviewer 2, discussing with reviewer 1 about your reviews of the following answers. Read the reviews and discussions above, and make a decision if to change your preference, and explain. Remember we focus on unsupported information, core information and coherence In a new line, choose between answer 1 and answer 2 by outputting the number 1 or 2 respectively. Do not output anything else other than the number in this last line.
2307.02762#97
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
97
4.2 Benchmarks for Specific Downstream Tasks Other than benchmarks for general tasks, there exist benchmarks specifically designed for certain downstream tasks. Question-answering benchmarks have become a fundamental component in the assessment of LLMs and their overall performance. MultiMedQA [177] is a medical QA benchmark that focuses on medical examinations, medical research, and consumer healthcare questions. It consists of seven datasets related to medical QA, including six existing datasets and one new dataset. The goal of this benchmark is to evaluate the performance of LLMs in terms of clinical knowledge and QA abilities. To assess the ability of LLMs in dynamic QA about current world knowledge, Vu et al. [198] introduced FRESHQA. By incorporating relevant and current information retrieved from search engines into prompts, there is a significant enhancement in the performance of LLMs on FRESHQA. To effectively assess in-depth dialogue, Wang et al. [205] introduced the Dialogue CoT, incorporating two efficient dialogue strategies: Explicit CoT and CoT.
2307.03109#97
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.02762
98
GPT-3.5: After considering the points made by Reviewer 1, I agree that the subjective language used in Answer 1 could be seen as unsupported information. Additionally, Answer 2 provides a concise and accurate explanation without any unsupported information. Therefore, I have changed my preference to Answer 2. 2 [System] You are reviewer 1, discussing with reviewer 2 about your reviews of the following answers. Read the reviews and discussions above, and make a decision if to change your preference, and explain. Remember we focus on unsupported information, core information and coherence In a new line, choose between answer 1 and answer 2 by outputting the number 1 or 2 respectively. Do not output anything else other than the number in this last line. GPT-4: After considering Reviewer 2’s points, I still believe that Answer 1 provides a more detailed and accessible explanation for someone who may not be familiar with banking terms. The subjective language used does not detract from the core information or coherence of the answer. Therefore, I maintain my preference for Answer 1. 1
2307.02762#98
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
98
The assessment of LLMs in diverse and demanding tasks has garnered substantial attention in recent research. To this end, a range of specialized benchmarks have been introduced to evalu- ate LLMs’ capabilities in specific domains and applications. Among these, ARB, as presented by Sawada et al. [171], focuses on probing the performance of LLMs in advanced reasoning tasks spanning multiple domains. Additionally, ethical considerations in LLMs have become an area of paramount importance. TRUSTGPT, as tailored by Huang et al. [79], addresses critical ethical dimensions, including toxicity, bias, and value alignment, within the context of LLMs. Furthermore, the simulation of human emotional reactions by LLMs remains an area with significant potential for improvement, as highlighted by the EmotionBench benchmark by Huang et al. [76]. In terms of security evaluation, Zhang et al. [254] have introduced SafetyBench, a benchmark specifically designed to test the security performance of a range of popular Chinese and English LLMs. The results of this evaluation reveal substantial security flaws in current LLMs. To evaluate the daily decision-making capabilities of intelligent systems, Hou et al. [75] introduced
2307.03109#98
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.02762
99
Answer 1. 1 [System] You are reviewer 2, discussing with reviewer 1 about your reviews of the following answers. Read the reviews and discussions above, and make a decision if to change your preference, and explain. Remember we focus on unsupported information, core information and coherence In a new line, choose between answer 1 and answer 2 by outputting the number 1 or 2 respectively. Do not output anything else other than the number in this last line. GPT-3.5: After considering Reviewer 1’s points, I understand their perspective on the accessibility of Answer 1. While the subjective language used may not necessarily detract from the core information or coherence, it is important to prioritize accuracy and reliability in providing information. Therefore, I am changing my preference to Answer 2, which provides a concise and accurate explanation without any unsupported information. 2
2307.02762#99
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
99
evaluation reveal substantial security flaws in current LLMs. To evaluate the daily decision-making capabilities of intelligent systems, Hou et al. [75] introduced Choice-75. Additionally, to assess LLMs’ aptitude in understanding complex instructions, He et al. [66] have introduced CELLO. This benchmark encompasses the design of eight distinctive features, the development of a comprehensive evaluation dataset, and the establishment of four evaluation criteria alongside their respective measurement standards.
2307.03109#99
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
100
Other specific benchmarks include C-Eval [78], the first extensive benchmark to assess the advanced knowledge and reasoning capabilities of foundation models in Chinese. Additionally, Li et al. [108] introduces CMMLU as a comprehensive Chinese proficiency standard and evaluates the performance of 18 LLMs across various academic disciplines. The findings reveal that the majority of LLMs demonstrate suboptimal performance in Chinese language environments, highlighting areas for improvement. M3Exam [250] provides a unique and comprehensive evaluation framework that incorporates multiple languages, modalities, and levels to test the general capabilities of LLMs in diverse contexts. Additionally, GAOKAO-Bench [245] provides a comprehensive evaluation benchmark for gauging the proficiency of large language models in intricate and context-specific tasks, utilizing questions sourced from the Chinese Gaokao examination. On the other hand, SOCKET [23] serves as an NLP benchmark designed to evaluate the performance of LLMs in learning and recognizing social knowledge concepts. It consists of several tasks and case studies to assess the limitations of LLMs in social capabilities. MATH [72] concentrates on assessing reasoning and problem-solving
2307.03109#100
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
101
several tasks and case studies to assess the limitations of LLMs in social capabilities. MATH [72] concentrates on assessing reasoning and problem-solving proficiencies of AI models within the domain of mathematics. APPS [68] is a more comprehensive and rigorous benchmark for evaluating code generation, measuring the ability of language models to generate python code according to natural language specifications. CUAD [71] is an expert-annotated, domain-specific legal contract review dataset that presents a challenging research benchmark and potential for enhancing deep learning models’ performance in contract understanding tasks. CVALUES [230] introduces a humanistic evaluation benchmark to assess the alignment of LLMs with safety and responsibility standards. In the realm of comprehensive Chinese medicine, Wang et al. [211] introduced CMB, a medical evaluation benchmark rooted in the Chinese language and culture. It addresses the potential inconsistency in the local context that may arise from relying solely on English-based medical assessments. In the realm of hallucination assessment, [116] has developed UHGEval, a benchmark specifically designed to evaluate the performance of Chinese LLMs in text generation without being constrained by
2307.03109#101
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
102
of hallucination assessment, [116] has developed UHGEval, a benchmark specifically designed to evaluate the performance of Chinese LLMs in text generation without being constrained by hallucination-related limitations. In addition to existing evaluation benchmarks, there is a research gap in assessing the effectiveness of utilizing tools for LLMs. To address this gap, the API-Bank benchmark [109] is introduced as the first benchmark explicitly designed for tool-augmented LLMs. It comprises a comprehensive Tool-Augmented LLM workflow, encompassing 53 commonly used API tools and 264 annotated
2307.03109#102
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
103
Table 8. Summary of new LLMs evaluation protocols. Human-in-the-loop / Crowd-sourcing testing: DynaBench [94], DynaBoard [132], DynamicTempLAMA [135], DynaTask [188]; More challenging tests: HELM [114], AdaFilter [157], CheckList [165], Big-Bench [182], DeepTest [190]. dialogues, encompassing a total of 568 API calls. Furthermore, the ToolBench project [191] aims to empower the development of large language models that effectively leverage the capabilities of general-purpose tools. By providing a platform for creating optimized instruction datasets, the ToolBench project seeks to drive progress in language models and enhance their practical applications. To evaluate LLMs in multi-turn interactions, Wang et al. [213] proposed MINT, which utilizes tools and natural language feedback.
2307.03109#103
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
104
4.3 Benchmarks for Multi-modal task For the evaluation of Multimodal Large Language Models (MLLMs), MME [46] serves as an extensive evaluative benchmark, aiming to assess their perceptual and cognitive aptitudes. It employs meticulously crafted instruction-answer pairs alongside succinct instruction design, thereby guaranteeing equitable evaluation conditions. To robustly evaluate large-scale vision-language models, Liu et al. [126] introduced MMBench, which comprises a comprehensive dataset and employs a CircularEval assessment method. Additionally, MMICL [255] enhances visual language models for multimodal inputs and excels in tasks such as MME and MMBench. Furthermore, LAMM [235] extends its research to encompass multimodal point clouds. LVLM-eHub [231] undertakes an exhaustive evaluation of LVLMs using an online competitive platform and quantitative capacity assessments. To comprehensively assess the generative and understanding capabilities of Multi-modal Large Language Models (MLLMs), Li et al. [107] introduced a novel benchmark named SEED-Bench. This benchmark consists of 19,000 multiple-choice questions that have been annotated by human assessors. Additionally, the evaluation covers 12 different aspects, including the models’ proficiency in understanding patterns within images and videos. In summary, recent works have developed robust benchmarks and improved models that advance the study of multimodal languages.
2307.03109#104
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
105
5 HOW TO EVALUATE In this section, we introduce two common evaluation methods: automatic evaluation and human evaluation. Our categorization is based on whether or not the evaluation criterion can be automatically computed. If it can be automatically calculated, we categorize it into automatic evaluation; otherwise, it falls into human evaluation. 5.1 Automatic Evaluation Automated evaluation is a common, and perhaps the most popular, evaluation method that typically uses standard metrics and evaluation tools to evaluate model performance. Compared with human evaluation, automatic evaluation does not require intensive human participation, which not only saves time, but also reduces the impact of human subjective factors and makes the evaluation process more standardized. For example, both Qin et al. [159] and Bang et al. [6] use automated evaluation methods to evaluate a large number of tasks. Recently, with the development of LLMs, some advanced automatic evaluation techniques are also designed to help evaluate. Lin and Chen [121] proposed LLM-EVAL, a unified multidimensional automatic evaluation method for open-domain conversations with LLMs. PandaLM [216] can achieve reproducible and automated language model assessment by training an LLM that serves as the “judge” to evaluate different models. Proposing a self-supervised evaluation framework, Jain et al. [82] enabled a more efficient form of evaluating models in real-world deployment by eliminating the need for laborious labeling of new data.
2307.03109#105
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
106
Table 9. Key metrics of automatic evaluation. Accuracy: Exact match, Quasi-exact match, F1 score, ROUGE score [118]; Calibrations: Expected calibration error [60], Area under the curve [54]; Fairness: Demographic parity difference [242], Equalized odds difference [64]; Robustness: Attack success rate [203], Performance drop rate [264]. Proposing a self-supervised evaluation framework, Jain et al. [82] enabled a more efficient form of evaluating models in real-world deployment by eliminating the need for laborious labeling of new data. In addition, many benchmarks also apply automatic evaluation, such as MMLU [70], HELM [114], C-Eval [78], AGIEval [262], AlpacaFarm [38], Chatbot Arena [128], etc. Based on the literature that adopted automatic evaluation, we summarized the main metrics in automatic evaluation in Table 9. The key metrics include the following four aspects:
2307.03109#106
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
107
Based on the literature that adopted automatic evaluation, we summarized the main metrics in automatic evaluation in Table 9. The key metrics include the following four aspects: (1) Accuracy is a measure of how correct a model is on a given task. The concept of accuracy may vary in different scenarios and is dependent on the specific task and problem definition. It can be measured using various metrics such as Exact Match, F1 score, and ROUGE score. • Exact Match (EM) is a metric used to evaluate whether the model’s output in text generation tasks precisely matches the reference answer. In question answering tasks, if the model’s generated answer is an exact match with the manually provided answer, the EM is 1; otherwise, it is 0. • The F1 score is a metric for evaluating the performance of binary classification models, combining the model’s precision and recall. The formula for calculation is as follows: $F1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$
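To make the Exact Match and F1 definitions above concrete, here is a minimal Python sketch; it is not code from the survey or any cited benchmark, and the whitespace/lowercase normalization rule and function names are illustrative assumptions.

```python
def exact_match(prediction: str, reference: str) -> int:
    """Return 1 if the normalized prediction equals the normalized reference, else 0."""
    normalize = lambda s: " ".join(s.lower().strip().split())
    return int(normalize(prediction) == normalize(reference))


def f1_score(tp: int, fp: int, fn: int) -> float:
    """Binary-classification F1 = 2 * P * R / (P + R), computed from confusion counts."""
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


# Example usage
print(exact_match("Paris", " paris "))        # 1
print(round(f1_score(tp=8, fp=2, fn=4), 3))   # 0.727
```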
2307.03109#107
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
108
ROUGE is primarily employed to assess the performance of tasks such as text summarization and machine translation, involving considerations of overlap and matching between texts. (2) Calibrations pertains to the degree of agreement between the confidence level of the model output and the actual prediction accuracy. • Expected Calibration Error (ECE) is one of the commonly used metrics to evaluate model calibration performance [60]. Tian et al. [189] utilized ECE to study the calibration of RLHF-LMs, including ChatGPT, GPT-4, Claude 1, Claude 2 and Llama2. For the calculation of ECE, they categorize model predictions based on confidence and measure the average accuracy of the predictions within each confidence interval. • Area Under the Curve of selective accuracy and coverage (AUC) [54] is another commonly used metric.
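As a rough illustration of the ECE computation described above, the sketch below bins predictions by confidence and averages the gap between each bin's accuracy and mean confidence, weighted by bin size. The equal-width binning scheme, the number of bins, and the function name are assumptions for illustration; this is not the code used by the cited works.

```python
import numpy as np


def expected_calibration_error(confidences, correct, n_bins: int = 10) -> float:
    """ECE over equal-width confidence bins: sum_b (|B_b| / N) * |acc(B_b) - conf(B_b)|."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap  # in_bin.mean() is the fraction of samples in this bin
    return ece


# Example usage: confidences paired with 0/1 correctness indicators
print(expected_calibration_error([0.9, 0.8, 0.6, 0.55], [1, 1, 1, 0]))
```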
2307.03109#108
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
109
(3) Fairness refers to whether the model treats different groups consistently, that is, whether the model’s performance is equal across different groups. This can include attributes such as gender, race, age, and more. DecodingTrust [201] employs the following two metrics for measuring fairness: • Demographic Parity Difference (DPD) measures whether the model’s predictions are distributed equally across different population groups. If predictions differ significantly between groups, the DPD is high, indicating that the model may be unfairly biased against different groups. The calculation of DPD involves the prediction of the model and the true label, and the following formula can be used: $DPD = P(\hat{y} \mid Z = 1) - P(\hat{y} \mid Z = 0)$, where $\hat{y}$ is the binary classification prediction of the model, $Z$ is the identifier of the population group (usually binary, indicating two different groups, such as men and women), and $P(\hat{y} \mid Z = 1)$ and $P(\hat{y} \mid Z = 0)$ respectively represent the probabilities of predicting the positive class in population $Z = 1$ and $Z = 0$.
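The following sketch illustrates the DPD formula above, taking binary predictions and a binary group identifier and comparing positive-prediction rates; the array names and the use of NumPy are illustrative assumptions, not part of DecodingTrust.

```python
import numpy as np


def demographic_parity_difference(y_pred, z) -> float:
    """DPD = P(y_hat = 1 | Z = 1) - P(y_hat = 1 | Z = 0) for binary predictions y_pred
    and a binary group identifier z."""
    y_pred = np.asarray(y_pred)
    z = np.asarray(z)
    p_pos_group1 = y_pred[z == 1].mean()  # positive-prediction rate in group Z = 1
    p_pos_group0 = y_pred[z == 0].mean()  # positive-prediction rate in group Z = 0
    return float(p_pos_group1 - p_pos_group0)


# Example: group 1 receives positive predictions 75% of the time, group 0 only 25%
print(demographic_parity_difference([1, 1, 1, 0, 1, 0, 0, 0],
                                    [1, 1, 1, 1, 0, 0, 0, 0]))  # 0.5
```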
2307.03109#109
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
111
• Equalized Odds Difference (EOD) aims to ensure that the model provides equal error rates across different populations, that is, the model’s prediction error probability distribution is similar for different populations. The calculation of EOD involves probabilities related to true positive (TP), true negative (TN), false positive (FP), and false negative (FN) predictions. The formula for EOD is as follows: $\max\{P(\hat{y} = 1 \mid Y = 1, Z = 1) - P(\hat{y} = 1 \mid Y = 1, Z = 0),\ P(\hat{y} = 1 \mid Y = 0, Z = 1) - P(\hat{y} = 1 \mid Y = 0, Z = 0)\}$, where $\hat{y}$ is the binary classification prediction of the model, $Y$ is the true label, $Z$ is the demographic group identifier (typically binary, representing two different groups), and $P(\hat{y} = 1 \mid Y = 1, Z = 1)$ denotes the probability of the model predicting a positive class when the true label
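A minimal sketch of the EOD formula above, assuming binary NumPy arrays for predictions, labels, and the group identifier; it follows the signed-difference form written in the text (no absolute value) and is an illustrative implementation rather than the exact DecodingTrust code.

```python
import numpy as np


def equalized_odds_difference(y_pred, y_true, z) -> float:
    """EOD = max{ P(y_hat=1 | Y=1, Z=1) - P(y_hat=1 | Y=1, Z=0),
                  P(y_hat=1 | Y=0, Z=1) - P(y_hat=1 | Y=0, Z=0) }."""
    y_pred, y_true, z = (np.asarray(a) for a in (y_pred, y_true, z))

    def positive_rate(label, group):
        # P(y_hat = 1 | Y = label, Z = group), or 0.0 if the slice is empty
        mask = (y_true == label) & (z == group)
        return y_pred[mask].mean() if mask.any() else 0.0

    tpr_diff = positive_rate(1, 1) - positive_rate(1, 0)  # gap on positive-label examples
    fpr_diff = positive_rate(0, 1) - positive_rate(0, 0)  # gap on negative-label examples
    return float(max(tpr_diff, fpr_diff))


# Example usage with a small synthetic batch
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
y_true = [1, 1, 0, 1, 1, 0, 0, 0]
z      = [1, 1, 1, 1, 0, 0, 0, 0]
print(equalized_odds_difference(y_pred, y_true, z))
```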
2307.03109#111
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
113
(4) Robustness evaluates the performance of a model in the face of various challenging inputs, including adversarial attacks, changes in data distribution, noise, etc. • Attack Success Rate (ASR) serves as a metric for evaluating the adversarial robustness of LLMs [206]. Specifically, consider a dataset $D = \{(x_i, y_i)\}_{i=1}^{N}$ containing $N$ pairs of samples $x_i$ and ground truths $y_i$. For an adversarial attack method $A$, given an input $x$, this method can produce adversarial examples $A(x)$ to attack the surrogate model $f$, and the success rate is calculated as: $\mathrm{ASR} = \frac{\sum_{(x, y) \in D} \mathbb{I}[f(A(x)) \neq y]}{\sum_{(x, y) \in D} \mathbb{I}[f(x) = y]}$, where $\mathbb{I}$ is the indicator function [203]. • Performance Drop Rate (PDR), a new unified metric, effectively assesses the robustness of prompts in LLMs [264]. PDR quantifies the relative performance degradation after a prompt attack, and the formula is as follows: $\mathrm{PDR}(A, P, f, D) = 1 - \frac{\sum_{(x, y) \in D} \mathcal{M}[f([A(P), x]), y]}{\sum_{(x, y) \in D} \mathcal{M}[f([P, x]), y]}$, where $A$ represents the adversarial attack applied to prompt $P$, and $\mathcal{M}$ denotes the evaluation function, which varies across different tasks [264].
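Below is a minimal Python sketch of the two robustness metrics just defined; `model`, `attack`, and `metric` are placeholder callables whose interfaces are assumptions for illustration, not APIs from the cited works.

```python
# Illustrative sketch of Attack Success Rate (ASR) and Performance Drop Rate
# (PDR). `model`, `attack`, and `metric` are hypothetical callables.
def attack_success_rate(model, attack, dataset):
    """ASR: among samples the model classifies correctly, the fraction
    that become misclassified once the attack A(x) is applied."""
    correct, flipped = 0, 0
    for x, y in dataset:
        if model(x) == y:
            correct += 1
            if model(attack(x)) != y:
                flipped += 1
    return flipped / correct if correct else 0.0

def performance_drop_rate(model, attack, metric, prompt, dataset):
    """PDR = 1 - (score with the attacked prompt) / (score with the
    original prompt), where `metric` scores a prediction against y."""
    clean = sum(metric(model(prompt, x), y) for x, y in dataset)
    attacked = sum(metric(model(attack(prompt), x), y) for x, y in dataset)
    return 1.0 - attacked / clean if clean else 0.0
```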
2307.03109#113
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
114
5.2 Human Evaluation The rapidly growing capabilities of LLMs have gone beyond what standard evaluation metrics on general natural language tasks can capture. Therefore, human evaluation becomes a natural choice in non-standard cases where automatic evaluation is not suitable. For instance, in open-ended generation tasks where embedding-based similarity metrics (such as BERTScore) are not enough, human evaluation is more reliable [142]. While some generation tasks can adopt certain automatic evaluation protocols, human evaluation in these tasks remains preferable, because generated text can always be better than any single reference answer.
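As a point of reference for the kind of automatic metric that human evaluation complements, the sketch below computes BERTScore with the third-party `bert-score` package; the tooling choice and example strings are assumptions, since the survey prescribes no implementation.

```python
# Embedding-based similarity scoring with BERTScore, assuming the
# `bert-score` package is installed (pip install bert-score).
from bert_score import score

candidates = ["The model summarizes the article accurately."]
references = ["The article is summarized correctly by the model."]

# Returns precision, recall, and F1 tensors computed over BERT embeddings.
P, R, F1 = score(candidates, references, lang="en")
print(f"BERTScore F1: {F1.mean().item():.3f}")
```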
2307.03109#114
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
115
Human evaluation is a way to evaluate the quality and accuracy of model-generated results through human participation. Compared with automatic evaluation, manual evaluation is closer to the actual application scenario and can provide more comprehensive and accurate feedback. In the manual evaluation of LLMs, evaluators (such as experts, researchers, or ordinary users) are usually invited to evaluate the results generated by the model. For example, Ziems et al. [269] used the annotations from experts for generation. Through human evaluation, Liang et al. [114] assessed summarization and disinformation scenarios across 6 models, and Bang et al. [6] evaluated analogical reasoning tasks. Bubeck et al. [15] conducted a series of human-crafted tests with GPT-4 and found that GPT-4 performs close to, or even exceeds, human performance on multiple tasks. This evaluation requires human evaluators to actually test and compare the performance of the models, not just evaluate the models through automated evaluation metrics. Note that even human evaluations can have high variance and instability, which could be due to cultural and individual differences [155]. In practice, automatic and human evaluation are weighed and combined according to the actual application scenario. Exploring the human evaluation methods of LLMs requires thoughtful attention to various crucial factors to guarantee the dependability and precision of assessments [178]. Table 10 provides a concise overview of the essential aspects of human evaluation, including the number of evaluators,
2307.03109#115
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
116
# Table 10. Summary of key factors in human evaluation

Evaluation Criteria | Key Factor
Number of evaluators | Adequate representation [7], Statistical significance
Evaluation rubrics | Accuracy [178], Relevance [261], Fluency [196], Transparency, Safety [85], Human alignment
Evaluator’s expertise level | Relevant domain expertise [144], Task familiarity, Methodological training

evaluation criteria, and evaluator’s expertise level. Primarily, the number of evaluators emerges as a crucial factor intricately intertwined with adequate representation and statistical significance. A judiciously chosen number of evaluators contributes to a more nuanced and comprehensive understanding of the LLMs under scrutiny, enabling a more reliable extrapolation of the results to a broader context. Furthermore, evaluation criteria are fundamental components of the human assessment process. Expanding upon the principles of the 3H rule (Helpfulness, Honesty, and Harmlessness) [4], we have elaborated them into the following 6 human assessment criteria. These criteria include accuracy, relevance, fluency, transparency, safety, and human alignment. Through the application of these standards, a thorough analysis of LLMs’ performance in syntax, semantics, and context is achieved, allowing for a more comprehensive evaluation of the quality of generated text.
2307.03109#116
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
117
(1) Accuracy [178] stands out as a pivotal criterion that assesses the precision and correctness of the generated text. It involves scrutinizing the extent to which the language model produces information that aligns with factual knowledge, avoiding errors and inaccuracies. (2) Relevance [261] focuses on the appropriateness and significance of the generated content. This criterion examines how well the text addresses the given context or query, ensuring that the information provided is pertinent and directly applicable. (3) Fluency [196] assesses the language model’s ability to produce content that flows smoothly, maintaining a consistent tone and style. A fluent text is not only grammatically correct but also ensures readability and a seamless user experience. Analysts evaluate how well the model avoids awkward expressions and abrupt shifts in language or topic, contributing to effective communication with users. (4) Transparency delves into the clarity and openness of the language model’s decision-making process. It involves assessing how well the model communicates its thought processes, enabling users to understand how and why certain responses are generated. A transparent model provides insights into its inner workings. (5) Safety [85] emerges as a critical criterion concerned with the potential harm or unintended consequences arising from the generated text. It examines the language model’s ability to avoid producing content that may be inappropriate, offensive, or harmful, ensuring the well-being of users and avoiding misinformation.
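A hypothetical sketch of how per-criterion ratings along the dimensions above might be aggregated across evaluators is given below; the 1-5 rubric, the criterion names, and all identifiers are assumptions for illustration, not a protocol prescribed by the survey.

```python
# Hypothetical aggregation of 1-5 ratings collected from three evaluators
# along the criteria above; the rubric, scores, and names are assumptions.
from statistics import mean, stdev

ratings = {  # criterion -> one score per evaluator
    "accuracy":     [4, 5, 4],
    "relevance":    [5, 5, 4],
    "fluency":      [5, 4, 5],
    "transparency": [3, 4, 3],
    "safety":       [5, 5, 5],
}

for criterion, scores in ratings.items():
    spread = stdev(scores) if len(scores) > 1 else 0.0
    print(f"{criterion:>12}: mean={mean(scores):.2f}  spread={spread:.2f}")
```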
2307.03109#117
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
118
(6) Human alignment assesses the degree to which the language model’s output aligns with human values, preferences, and expectations. It considers the ethical implications of the generated content, ensuring that the language model produces text that respects societal norms and user expectations, promoting a positive interaction with human users. Lastly, the expertise level of evaluators is a critical consideration, encompassing relevant domain knowledge, task familiarity, and methodological training. Delineating the requisite expertise level for evaluators ensures that they possess the necessary background knowledge to accurately comprehend and assess the domain-specific text generated by LLMs. This strategy adds a layer of rigor to the evaluation process, reinforcing the credibility and validity of the findings. 6 SUMMARY In this section, we summarize the key findings based on our review in sections 3, 4, and 5.
2307.03109#118
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
119
6 SUMMARY In this section, we summarize the key findings based on our review in sections 3, 4, and 5. First of all, we would like to highlight that despite all the efforts spent on summarizing existing works on evaluation, there is no evidence to explicitly show that any single evaluation protocol or benchmark is the most useful and successful; rather, each has different characteristics and focuses. This also demonstrates that no single model can perform best in all kinds of tasks. The purpose of this survey is to go beyond simply determining the “best” benchmark or evaluation protocol. By summarizing and analyzing existing efforts on LLMs evaluation, we may identify the current success and failure cases of LLMs, derive new trends for evaluation protocols, and most importantly, propose new challenges and opportunities for future research. 6.1 Task: Success and Failure Cases of LLMs We now summarize the success and failure cases of LLMs in different tasks. Note that all the following conclusions are made based on existing evaluation efforts and the results are only dependent on specific datasets. 6.1.1 What can LLMs do well? • LLMs demonstrate proficiency in generating text [11, 14, 24] by producing fluent and precise linguistic expressions. • LLMs obtain impressive performance in tasks involving language understanding, including sentiment analysis [52, 129, 159], text classification [114, 154, 233], as well as the handling of factual input [159].
2307.03109#119
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
120
• LLMs demonstrate robust arithmetic reasoning capabilities [159] and excel in logical reasoning [124]. Moreover, they exhibit noteworthy proficiency in temporal reasoning [6]. Furthermore, more intricate tasks such as mathematical reasoning [226, 237, 244] and structured data inference [86, 151] have emerged as the prevailing benchmarks for evaluation.
• LLMs exhibit robust contextual comprehension, enabling them to generate coherent responses that align with the given input [187].
• LLMs also achieve satisfactory performance across several natural language processing tasks, including machine translation [6, 130, 208], text generation [20], and question answering [102, 114].
6.1.2 When can LLMs fail?
• Within the realm of NLI, LLMs exhibit subpar performance and encounter challenges in accurately representing human disagreements [105].
• LLMs exhibit restricted proficiency in discerning semantic similarity between events [184] and demonstrate substandard performance in evaluating fundamental phrases [166].
• LLMs have limited abilities in abstract reasoning [56] and are prone to confusion or errors in complex contexts [148].
2307.03109#120
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
121
• LLMs have limited abilities in abstract reasoning [56] and are prone to confusion or errors in complex contexts [148]. In linguistic contexts featuring non-Latin scripts and limited resources, LLMs manifest suboptimal performance [2, 6, 100, 250]. Furthermore, generative LLMs consistently display proficiency levels below the expected standards across various tasks and languages [2].
• LLMs demonstrate susceptibility when processing visual modal information [258]. Furthermore, they have the capacity to assimilate, disseminate, and potentially magnify detrimental content found within the acquired training datasets, frequently encompassing toxic linguistic elements, including offensive, hostile, and derogatory language [53].
• LLMs may exhibit social biases and toxicity [37, 53, 153] during the generation process, resulting in the production of biased outputs.
• LLMs may manifest credibility deficits [201], potentially giving rise to fabricated information or erroneous facts within dialogues [163, 253].
• LLMs have limitations in incorporating real-time or dynamic information [127], making them less suitable for tasks that require up-to-date knowledge or rapid adaptation to changing contexts.
• LLMs are sensitive to prompts, especially adversarial prompts [264], which has triggered new evaluations and algorithms to improve their robustness.
2307.03109#121
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
122
• LLMs are sensitive to prompts, especially adversarial prompts [264], which has triggered new evaluations and algorithms to improve their robustness.
6.2 Benchmark and Evaluation Protocol
With the rapid development and widespread use of LLMs, evaluating them in practical applications and research has become crucial. This evaluation process should include not only task-level evaluation but also a deep understanding of the potential risks they pose from a societal perspective. In this section, we summarize existing benchmarks and protocols in Table 8. First, there is a shift from objective calculation to human-in-the-loop testing, allowing for greater human feedback during the evaluation process. AdaVision [50], an interactive process for testing vision models, enables users to label a small amount of data for model correctness, which helps them identify and fix coherent failure modes. In AdaTest [164], the user filters test samples by selecting only high-quality tests and organizing them into semantically related topics. Second, a move from static to crowd-sourced test sets is becoming more common. Tools like DynaBench [94], DynaBoard [132], and DynaTask [188] rely on crowdworkers to create and test hard samples. Additionally, DynamicTempLAMA [135] allows for dynamically constructed time-related tests.
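To make the human-in-the-loop idea above concrete, here is a minimal sketch of a crowd-sourced testing loop, assuming a placeholder `ask_model` interface and annotator-supplied candidates; it is not the actual AdaTest, AdaVision, or DynaBench API. Annotators propose candidate tests, the model under test is queried, and only verified failures are kept and grouped into semantically related topics for later inspection.

```python
from collections import defaultdict

def ask_model(question: str) -> str:
    """Placeholder for a call to the LLM under test."""
    return "model answer for: " + question

def collect_hard_examples(proposals, expected_answers, topics):
    """proposals, expected_answers, topics are parallel lists supplied by annotators."""
    failures = defaultdict(list)  # topic -> list of (question, expected, got)
    for question, expected, topic in zip(proposals, expected_answers, topics):
        got = ask_model(question)
        if expected.lower() not in got.lower():  # crude correctness check
            failures[topic].append((question, expected, got))
    return dict(failures)

# Annotator-written candidates (all hypothetical):
proposals = ["Is 17 a prime number?", "Who wrote 'Hamlet'?"]
expected = ["yes", "shakespeare"]
topics = ["math", "literature"]
print(collect_hard_examples(proposals, expected, topics))
```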
2307.03109#122
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
123
Third, there is a shift from a unified to a challenging setting in evaluating machine learning models. While unified settings involve a test set with no preference for any specific task, challenging settings create test sets for specific tasks. Tools like DeepTest [190] use seeds to generate input transformations for testing, CheckList [165] builds test sets based on templates, and AdaFilter [157] adversarially constructs tests. However, it is worth noting that AdaFilter may not be entirely fair because it relies on adversarial examples. HELM [114] evaluates LLMs from different aspects, while the Big-Bench [182] platform is used to design hard tasks for machine learning models to tackle. PromptBench [264] evaluates the adversarial robustness of LLMs by creating adversarial prompts, a more challenging setting whose results demonstrate that current LLMs are not robust to adversarial prompts.
7 GRAND CHALLENGES AND OPPORTUNITIES FOR FUTURE RESEARCH
Evaluation as a new discipline: Our summarization inspires us to redesign a wide spectrum of aspects related to evaluation in the era of LLMs. In this section, we present several grand challenges. Our key point is that evaluation should be treated as an essential discipline to drive the success of LLMs and other AI models. Existing protocols are not enough to thoroughly evaluate the true capabilities of LLMs, which poses grand challenges and opens new opportunities for future research on LLMs evaluation.
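A minimal sketch of the general idea behind adversarial-prompt robustness testing, not PromptBench's actual implementation: the same task is posed through a clean prompt and several perturbed variants, and the gap between clean and perturbed accuracy quantifies robustness. The `ask_model` call and the toy character-level perturbation are placeholders.

```python
import random

def ask_model(prompt: str) -> str:
    """Placeholder for a call to the LLM under test."""
    return "positive"  # dummy answer

def perturb(prompt: str, rng: random.Random) -> str:
    """Toy character-level perturbation standing in for an adversarial attack."""
    chars = list(prompt)
    i = rng.randrange(len(chars))
    chars[i] = chars[i].swapcase()
    return "".join(chars)

def robustness_drop(examples, n_variants=3, seed=0):
    """examples: list of (prompt, gold_label). Returns (clean_acc, adv_acc)."""
    rng = random.Random(seed)
    clean_hits, adv_hits, adv_total = 0, 0, 0
    for prompt, gold in examples:
        clean_hits += int(gold in ask_model(prompt).lower())
        for _ in range(n_variants):
            adv_hits += int(gold in ask_model(perturb(prompt, rng)).lower())
            adv_total += 1
    return clean_hits / len(examples), adv_hits / adv_total

examples = [("Review: 'Great movie!' Sentiment (positive/negative)?", "positive")]
print(robustness_drop(examples))
```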
2307.03109#123
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
124
7.1 Designing AGI Benchmarks
As we discussed earlier, while all tasks can potentially serve as evaluation tools for LLMs, the question remains as to which can truly measure AGI capabilities. As we expect LLMs to demonstrate AGI abilities, a comprehensive understanding of the differences between human and AGI capacities becomes crucial in the creation of AGI benchmarks. The prevailing trend seems to conceptualize AGI as a superhuman entity, thereby utilizing cross-disciplinary knowledge from fields such as education, psychology, and social sciences to design innovative benchmarks. Nonetheless, there remains a plethora of unresolved issues. For instance, does it make sense to use human values as a starting point for test construction, or should alternative perspectives be considered? Developing suitable AGI benchmarks presents many open questions demanding further exploration.
2307.03109#124
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
125
7.2 Complete Behavioral Evaluation
An ideal AGI evaluation should contain not only standard benchmarks on common tasks, but also evaluations on open tasks such as complete behavioral tests. By behavioral test, we mean that AGI models should also be evaluated in an open environment. For instance, by treating LLMs as the central controller, we can construct evaluations of a robot manipulated by LLMs to test its behaviors in real situations. By treating LLMs as a completely intelligent machine, evaluations of their multi-modal dimensions should also be considered. In fact, complete behavioral evaluations are complementary to standard AGI benchmarks, and they should work together for better testing.
7.3 Robustness Evaluation
Beyond general tasks, it is crucial for LLMs to maintain robustness against a wide variety of inputs in order to perform optimally for end-users, given their extensive integration into daily life. For instance, the same prompt expressed with different grammar and wording can lead ChatGPT and other LLMs to generate diverse results, indicating that current LLMs are not robust to their inputs. While there is some prior work on robustness evaluation [206, 264], there is much room for advancement, such as including more diverse evaluation sets, examining more evaluation aspects, and developing more efficient ways to generate robustness tasks. Concurrently, the concept and definition of robustness are constantly evolving. It is thus vital to consider updating the evaluation system to better align with emerging requirements related to ethics and bias.
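One simple way to probe this kind of input sensitivity is to pose semantically equivalent paraphrases of the same question and measure how often the model's answers agree. The sketch below assumes a placeholder `ask_model` interface; it is not the protocol of [206] or [264].

```python
from collections import Counter

def ask_model(prompt: str) -> str:
    """Placeholder for a call to the LLM under test."""
    return "Paris"

def consistency_rate(paraphrase_sets):
    """paraphrase_sets: list of lists of semantically equivalent prompts.
    Returns the average fraction of answers matching the majority answer."""
    scores = []
    for prompts in paraphrase_sets:
        answers = [ask_model(p).strip().lower() for p in prompts]
        majority_count = Counter(answers).most_common(1)[0][1]
        scores.append(majority_count / len(answers))
    return sum(scores) / len(scores)

paraphrases = [[
    "What is the capital of France?",
    "France's capital city is called what?",
    "Name the capital of France.",
]]
print(consistency_rate(paraphrases))
```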
2307.03109#125
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
126
7.4 Dynamic and Evolving Evaluation
Existing evaluation protocols for most AI tasks rely on static and public benchmarks, i.e., the evaluation datasets and protocols are often publicly available. While this facilitates rapid and convenient evaluation within the community, it cannot accurately assess the evolving abilities of LLMs, given their rapid rate of development. The capabilities of LLMs may improve over time, which cannot be consistently evaluated by existing static benchmarks. On the other hand, as LLMs grow increasingly powerful with larger model sizes and training set sizes, static and public benchmarks are likely to be memorized by LLMs, resulting in potential training data contamination. Therefore, developing dynamic and evolving evaluation systems is the key to providing a fair evaluation of LLMs.
7.5 Principled and Trustworthy Evaluation
When introducing an evaluation system, it is crucial to ascertain its integrity and trustworthiness. Therefore, the necessity for trustworthy computing extends to the requirement for reliable evaluation systems as well. This poses a challenging research question that intertwines with measurement theory, probability, and numerous other domains. For instance, how can we ensure that dynamic testing truly generates out-of-distribution examples? There is a scarcity of research in this domain, and it is hoped that future work will scrutinize not only the algorithms but the evaluation system itself.
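A minimal sketch of one way to build a dynamic, non-static test set, assuming a toy arithmetic template and a placeholder `ask_model` call (not any specific benchmark's procedure): each evaluation run samples fresh items, so the exact test instances never become a fixed public artifact that could leak into training data.

```python
import random

def ask_model(prompt: str) -> str:
    """Placeholder for a call to the LLM under test."""
    return "42"

def make_item(rng: random.Random):
    """Sample a fresh two-operand arithmetic question from a template."""
    a, b = rng.randint(100, 999), rng.randint(100, 999)
    return f"What is {a} + {b}? Answer with a number only.", str(a + b)

def dynamic_eval(n_items=100, seed=None):
    rng = random.Random(seed)  # a new seed per run -> a new test set
    items = [make_item(rng) for _ in range(n_items)]
    correct = sum(ask_model(q).strip() == gold for q, gold in items)
    return correct / n_items

print(dynamic_eval(n_items=10))
```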
2307.03109#126
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
127
7.6 Unified Evaluation that Supports All LLM Tasks There are many other research areas of LLMs, and we need to develop evaluation systems that can support all kinds of tasks, such as value alignment, safety, verification, interdisciplinary research, fine-tuning, and others. For instance, PandaLM [216] is an evaluation system that assists LLM fine-tuning by providing an open-source evaluation model that can automatically assess fine-tuning performance. We expect evaluation systems to become more general and to serve as assistants for specific LLM tasks.
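To make the idea of automatic assessment during fine-tuning concrete, below is a minimal sketch of pairwise evaluation with a judge model. The interface and the `toy_judge` heuristic are assumptions for illustration only and do not reflect the actual PandaLM API; in practice the judge would be a learned LLM-based evaluator.

```python
# Minimal sketch (hypothetical interface, not the actual PandaLM API): automatic
# pairwise assessment of two fine-tuned models over a shared prompt set, where a
# judge callable decides which answer is better. Replace `toy_judge` with an
# LLM-based evaluator in practice.
from collections import Counter
from typing import Callable, List

def pairwise_eval(
    prompts: List[str],
    answers_a: List[str],
    answers_b: List[str],
    judge: Callable[[str, str, str], str],  # returns "A", "B", or "tie"
) -> Counter:
    """Tally wins and ties for two candidate models over the same prompts."""
    tally: Counter = Counter()
    for prompt, a, b in zip(prompts, answers_a, answers_b):
        tally[judge(prompt, a, b)] += 1
    return tally

def toy_judge(prompt: str, a: str, b: str) -> str:
    """Placeholder heuristic judge: prefers the longer answer."""
    if len(a) == len(b):
        return "tie"
    return "A" if len(a) > len(b) else "B"

if __name__ == "__main__":
    prompts = ["Explain overfitting.", "Define a transformer."]
    model_a = ["Overfitting means memorizing noise.", "A transformer is an attention-based network."]
    model_b = ["It happens sometimes.", "A neural net."]
    print(pairwise_eval(prompts, model_a, model_b, toy_judge))  # e.g. Counter({'A': 2})
```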
2307.03109#127
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
128
7.7 Beyond Evaluation: LLM Enhancement Ultimately, evaluation is not the end goal but rather the starting point. Following evaluation, there are conclusions to be drawn regarding performance, robustness, stability, and other factors. A proficient evaluation system should not only offer benchmark results but also deliver insightful analysis, recommendations, and guidance for future research and development. For instance, PromptBench [264] provides not only robustness evaluation results on adversarial prompts but also a comprehensive analysis through attention visualization, elucidating how adversarial texts can lead to erroneous responses. The system further offers a word frequency analysis to identify robust and non-robust words in the test sets, thus providing prompt engineering guidance for end users. Subsequent research can leverage these findings to enhance LLMs. As another example, Wang et al. [215] first explored the performance of large vision-language models on imbalanced (long-tailed) tasks, demonstrating a limitation of current large models, and then explored different methodologies to enhance performance on these tasks. In summary, enhancement after evaluation helps to build better LLMs, and much can be done in the future.
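To illustrate the kind of word-frequency analysis described above, here is a minimal sketch (an illustrative assumption, not PromptBench's actual implementation): it compares relative word frequencies between prompts the model answered correctly and adversarial prompts that caused errors, and flags the words most over-represented among the failures as candidate non-robust words.

```python
# Sketch of a word-frequency analysis for robustness diagnosis (illustrative only,
# not PromptBench's implementation): words that appear much more often in prompts
# that caused wrong answers than in prompts answered correctly are flagged as
# candidate "non-robust" words for prompt-engineering guidance.
from collections import Counter
from typing import Dict, List, Tuple

def frequency_gap(
    correct_prompts: List[str],  # prompts the model answered correctly
    failing_prompts: List[str],  # (adversarial) prompts that caused errors
    top_k: int = 10,
) -> List[Tuple[str, float]]:
    def relative_freqs(prompts: List[str]) -> Dict[str, float]:
        counts = Counter(word.lower() for p in prompts for word in p.split())
        total = sum(counts.values()) or 1
        return {w: c / total for w, c in counts.items()}

    good = relative_freqs(correct_prompts)
    bad = relative_freqs(failing_prompts)
    gap = {w: bad[w] - good.get(w, 0.0) for w in bad}
    return sorted(gap.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

if __name__ == "__main__":
    ok = ["please summarize the article", "summarize the report briefly"]
    bad = ["kindly summarize teh articel", "plz summarize teh report"]
    print(frequency_gap(ok, bad))  # "teh", "plz", "kindly" rank near the top
```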
2307.03109#128
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
129
8 CONCLUSION Evaluation carries profound significance and has become imperative in the advancement of AI models, especially within the context of large language models. This paper presents the first survey to give a comprehensive overview of the evaluation of LLMs from three aspects: what to evaluate, how to evaluate, and where to evaluate. By encapsulating evaluation tasks, protocols, and benchmarks, our aim is to augment understanding of the current status of LLMs, elucidate their strengths and limitations, and furnish insights for future LLM progression. Our survey reveals that current LLMs exhibit certain limitations in numerous tasks, notably reasoning and robustness tasks. Concurrently, the need for contemporary evaluation systems to adapt and evolve remains evident, ensuring the accurate assessment of LLMs’ inherent capabilities and limitations. We identify several grand challenges that future research should address, with the aspiration that LLMs can progressively enhance their service to humanity. ACKNOWLEDGEMENTS This work is supported in part by NSF under grant III-2106758. DISCLAIMER The goal of this paper is mainly to summarize and discuss existing evaluation efforts on large language models. Results and conclusions in each paper are original contributions of their corresponding authors, particularly regarding potential issues of ethics and bias. This paper may discuss some side effects of LLMs, and the only intention is to foster a better understanding.
2307.03109#129
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
130
Due to the evolution of LLMs, especially online services such as Claude and ChatGPT, it is very likely that they will become stronger and that some of the limitations described in this paper will be mitigated (while new limitations may arise). We encourage interested readers to take this survey as a reference for future research and to conduct real experiments on current systems when performing evaluations. Finally, the evaluation of LLMs is continuously developing, so we may have missed some new papers or benchmarks. We welcome all constructive feedback and suggestions. REFERENCES [1] Ahmed Abdelali, Hamdy Mubarak, Shammur Absar Chowdhury, Maram Hasanain, Basel Mousi, Sabri Boughorbel, Yassine El Kheir, Daniel Izham, Fahim Dalvi, Majd Hawasly, et al. 2023. Benchmarking Arabic AI with Large Language Models. arXiv preprint arXiv:2305.14982 (2023).
2307.03109#130
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
131
[2] Kabir Ahuja, Rishav Hada, Millicent Ochieng, Prachi Jain, Harshita Diddee, Samuel Maina, Tanuja Ganu, Sameer Segal, Maxamed Axmed, Kalika Bali, et al. 2023. Mega: Multilingual evaluation of generative ai. arXiv preprint arXiv:2303.12528 (2023). [3] Daman Arora, Himanshu Gaurav Singh, et al. 2023. Have LLMs Advanced Enough? A Challenging Problem Solving Benchmark For Large Language Models. arXiv preprint arXiv:2305.15074 (2023). [4] Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. 2021. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861 (2021).
2307.03109#131
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
132
[5] Yushi Bai, Jiahao Ying, Yixin Cao, Xin Lv, Yuze He, Xiaozhi Wang, Jifan Yu, Kaisheng Zeng, Yijia Xiao, Haozhe Lyu, et al. 2023. Benchmarking Foundation Models with Language-Model-as-an-Examiner. arXiv preprint arXiv:2306.04181 (2023). [6] Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, et al. 2023. A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity. arXiv preprint arXiv:2302.04023 (2023). [7] Anja Belz and Ehud Reiter. 2006. Comparing automatic and human evaluation of NLG systems. In 11th Conference of the European Chapter of the Association for Computational Linguistics. 313–320.
2307.03109#132
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
133
[8] Daniel Berrar. 2019. Cross-Validation. [9] Ning Bian, Xianpei Han, Le Sun, Hongyu Lin, Yaojie Lu, and Ben He. 2023. ChatGPT is a knowledgeable but inexperienced solver: An investigation of commonsense problem in large language models. arXiv preprint arXiv:2303.16421 (2023). [10] Bojana Bodroza, Bojana M Dinic, and Ljubisa Bojic. 2023. Personality testing of GPT-3: Limited temporal reliability, but highlighted social desirability of GPT-3’s personality instruments results. arXiv preprint arXiv:2306.04308 (2023). [11] Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258 (2021).
2307.03109#133
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
134
[12] Nathan Brody. 1999. What is intelligence? International Review of Psychiatry 11, 1 (1999), 19–25. [13] Peter F Brown, Vincent J Della Pietra, Peter V Desouza, Jennifer C Lai, and Robert L Mercer. 1992. Class-based n-gram models of natural language. Computational linguistics 18, 4 (1992), 467–480. [14] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems 33 (2020), 1877–1901. [15] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712 (2023).
2307.03109#134
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
135
[16] Yong Cao, Li Zhou, Seolhwa Lee, Laura Cabello, Min Chen, and Daniel Hershcovich. 2023. Assessing Cross-Cultural Alignment between ChatGPT and Human Societies: An Empirical Study. In Proceedings of the First Workshop on Cross-Cultural Considerations in NLP (C3NLP). 53–67. [17] Marco Cascella, Jonathan Montomoli, Valentina Bellini, and Elena Bignami. 2023. Evaluating the feasibility of ChatGPT in healthcare: an analysis of multiple clinical and research scenarios. Journal of Medical Systems 47, 1 (2023), 33. [18] Cayque Monteiro Castro Nascimento and André Silva Pimentel. 2023. Do Large Language Models Understand Chemistry? A Conversation with ChatGPT. Journal of Chemical Information and Modeling 63, 6 (2023), 1649–1655. [19] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374 (2021).
2307.03109#135
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, education, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing the performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLM evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLM evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
136
[20] Yi Chen, Rui Wang, Haiyun Jiang, Shuming Shi, and Ruifeng Xu. 2023. Exploring the use of large language models for reference-free text quality evaluation: A preliminary empirical study. arXiv preprint arXiv:2304.00723 (2023). [21] Joseph Chervenak, Harry Lieman, Miranda Blanco-Breindel, and Sangita Jindal. 2023. The promise and peril of using a large language model to obtain clinical information: ChatGPT performs strongly as a fertility counseling tool with limitations. Fertility and Sterility (2023). [22] Yew Ken Chia, Pengfei Hong, Lidong Bing, and Soujanya Poria. 2023. INSTRUCTEVAL: Towards Holistic Evaluation of Instruction-Tuned Large Language Models. arXiv preprint arXiv:2306.04757 (2023).
2307.03109#136
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, education, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing the performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLM evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLM evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
137
[23] Minje Choi, Jiaxin Pei, Sagar Kumar, Chang Shu, and David Jurgens. 2023. Do LLMs Understand Social Knowledge? Evaluating the Sociability of Large Language Models with SocKET Benchmark. arXiv preprint arXiv:2305.14938 (2023). [24] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311 (2022). [25] Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. Advances in neural information processing systems 30 (2017).
2307.03109#137
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, education, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing the performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLM evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLM evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03109
138
[26] Benjamin Clavié, Alexandru Ciceu, Frederick Naylor, Guillaume Soulié, and Thomas Brightwell. 2023. Large Language Models in the Workplace: A Case Study on Prompt Engineering for Job Type Classification. In International Conference on Applications of Natural Language to Information Systems. Springer, 3–17. [27] Katherine M Collins, Albert Q Jiang, Simon Frieder, Lionel Wong, Miri Zilka, Umang Bhatt, Thomas Lukasiewicz, Yuhuai Wu, Joshua B Tenenbaum, William Hart, et al. 2023. Evaluating Language Models for Mathematics through Interactions. arXiv preprint arXiv:2306.01694 (2023). [28] Corinna Cortes and Vladimir Vapnik. 1995. Support-vector networks. Machine learning 20 (1995), 273–297. [29] Sunhao Dai, Ninglu Shao, Haiyuan Zhao, Weijie Yu, Zihua Si, Chen Xu, Zhongxiang Sun, Xiao Zhang, and Jun Xu.
2307.03109#138
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, education, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing the performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLM evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLM evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]