doi: string (length 10–10)
chunk-id: int64 (0–936)
chunk: string (length 401–2.02k)
id: string (length 12–14)
title: string (length 8–162)
summary: string (length 228–1.92k)
source: string (length 31–31)
authors: string (length 7–6.97k)
categories: string (length 5–107)
comment: string (length 4–398)
journal_ref: string (length 8–194)
primary_category: string (length 5–17)
published: string (length 8–8)
updated: string (length 8–8)
references: list
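The schema above describes one record per paper chunk. As a purely illustrative sketch, the snippet below shows how records with these fields could be loaded and regrouped into full papers; the JSON Lines storage format and the file name `llm_survey_chunks.jsonl` are assumptions for the example, not something specified by the dataset itself.

```python
import json
from collections import defaultdict

def load_chunks(path="llm_survey_chunks.jsonl"):
    """Load one record per line; each record carries the fields listed above
    (doi, chunk-id, chunk, id, title, summary, source, authors, ...)."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

def group_by_paper(rows):
    """Group chunk records by their arXiv id (the `doi` field) and order them by `chunk-id`."""
    papers = defaultdict(list)
    for row in rows:
        papers[row["doi"]].append(row)
    for chunks in papers.values():
        chunks.sort(key=lambda r: r["chunk-id"])
    return papers

if __name__ == "__main__":
    papers = group_by_paper(load_chunks())
    for doi, chunks in papers.items():
        print(doi, chunks[0]["title"], f"({len(chunks)} chunks)")
```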
2307.03109
0
arXiv:2307.03109v9 [cs.CL] 29 Dec 2023 # A Survey on Evaluation of Large Language Models YUPENG CHANG∗ and XU WANG∗, School of Artificial Intelligence, Jilin University, China; JINDONG WANG†, Microsoft Research Asia, China; YUAN WU†, School of Artificial Intelligence, Jilin University, China; LINYI YANG, Westlake University, China; KAIJIE ZHU, Institute of Automation, Chinese Academy of Sciences, China; HAO CHEN, Carnegie Mellon University, USA; XIAOYUAN YI, Microsoft Research Asia, China; CUNXIANG WANG, Westlake University, China; YIDONG WANG, Peking University, China; WEI YE, Peking University, China; YUE ZHANG, Westlake University, China; YI CHANG, School of Artificial Intelligence, Jilin University, China; PHILIP S. YU, University of Illinois at Chicago, USA; QIANG YANG, Hong Kong University of Science and Technology, China; XING XIE, Microsoft Research Asia, China
2307.03109#0
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
0
arXiv:2307.03172v3 [cs.CL] 20 Nov 2023 # Lost in the Middle: How Language Models Use Long Contexts Nelson F. Liu1∗ Kevin Lin2 John Hewitt1 Ashwin Paranjape3 Michele Bevilacqua3 Fabio Petroni3 Percy Liang1 1Stanford University 2University of California, Berkeley 3Samaya AI [email protected] # Abstract [Figure 1 panel title: 20 Total Retrieved Documents (~4K tokens)] While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
2307.03172#0
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
1
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized “strongest” LLM as the evaluator, which conducts pairwise comparisons of candidate models’ answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM’s pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We
2307.02762#1
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
1
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, education, natural and social sciences, agent applications, and other areas. Secondly, we answer the ‘where’ and ‘how’ questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing the performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several ∗Both authors contributed equally to this research. †Corresponding author.
2307.03109#1
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
1
[Figure 1 plot: accuracy of gpt-3.5-turbo-0613 (open-book vs. closed-book) as a function of the position (1st–20th) of the document containing the answer.] Figure 1: Changing the location of relevant information (in this case, the position of the passage that answers an input question) within the language model’s input context results in a U-shaped performance curve: models are better at using relevant information that occurs at the very beginning (primacy bias) or end of its input context (recency bias), and performance degrades significantly when models must access and use information located in the middle of its input context. # Introduction
2307.03172#1
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
2
prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model’s name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.1
2307.02762#2
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
2
Authors’ addresses: Yupeng Chang, [email protected]; Xu Wang, [email protected], School of Artificial Intelligence, Jilin University, 2699 Qianjin St, Changchun, China, 130012; Jindong Wang, Microsoft Research Asia, Beijing, China, [email protected]; Yuan Wu, School of Artificial Intelligence, Jilin University, Changchun, China, [email protected]; Linyi Yang, Westlake University, Hangzhou, China; Kaijie Zhu, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Hao Chen, Carnegie Mellon University, Pennsylvania, USA; Xiaoyuan Yi, Microsoft Research Asia, Beijing, China; Cunxiang Wang, Westlake University, Hangzhou, China; Yidong Wang, Peking University, Beijing, China; Wei Ye, Peking University, Beijing, China; Yue Zhang, Westlake University, Hangzhou, China; Yi Chang, School of Artificial Intelligence, Jilin University, Changchun, China; Philip S. Yu, University of Illinois at Chicago, Illinois, USA; Qiang Yang, Hong Kong University of Science and Technology, Kowloon, Hong Kong, China; Xing Xie, Microsoft Research Asia, Beijing, China.
2307.03109#2
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
2
# Introduction Language models have become an important and flexible building block in a variety of user-facing language technologies, including conversational interfaces, search and summarization, and collaborative writing (Shuster et al., 2022; Thoppilan et al., 2022; Lee et al., 2022, inter alia). These models perform downstream tasks primarily via prompting: all relevant task specification and data to process is formatted as a textual input context, and the model returns a generated text completion. These input contexts can contain thousands of tokens, especially when language models are used to process long documents (e.g., legal or scientific documents, conversation histories, etc.) or when language models are augmented with external information (e.g., relevant documents from a search engine, database query results, etc; Petroni et al., 2020; Ram et al., 2023; Shi et al., 2023; Mallen et al., 2023; Schick et al., 2023, inter alia).
2307.03172#2
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
3
# Introduction With a rising number of large language models (LLMs) being developed ever more quickly recently, evaluations become increasingly important as they encode values and priorities that the LLM community should improve upon (Jones and Galliers, 1995; Liang et al., 2022). At the same time, the evaluation becomes harder as well. For example, recent models finetuned with human feedback (RLHF) align with human preference more, but this capability usually cannot be reflected by decent performance on standard NLP benchmarks (e.g. MMLU (Hendrycks et al., 2020), ARC (Clark et al., 2018)). Furthermore, human queries span such a diverse range of settings and scenarios that it is nearly impossible to list them all. †Contribution distributed as follows: Xinya decided the project scope and research questions and proposed the initial version of the methodologies; Ruosen and Xinya designed the experiments; Ruosen and Teerth conducted experiments, analysis, and data collection; all authors contributed to paper writing. 1We release all human/machine-annotated pairwise comparisons, generated answers, generated discussions, and implementations at https://bcdnlp.github.io/PR_LLM_EVAL/.
2307.02762#3
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
3
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. © 2018 Association for Computing Machinery. 0004-5411/2018/8-ART111 $15.00 https://doi.org/XXXXXXX.XXXXXXX future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey. CCS Concepts: • Computing methodologies → Natural language processing; Machine learning.
2307.03109#3
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
3
Handling these use-cases requires language models to successfully operate over long sequences. Existing language models are generally implemented with Transformers (Vaswani et al., 2017), which require memory and compute that increases quadratically in sequence length. As a result, Transformer language models were often trained with relatively small context windows (between 512–2048 tokens). Recent improvements in hardware (e.g., faster GPUs with more memory) and algorithms (Dai et al., 2019; Dao et al., 2022; Poli et al., 2023; Rubin and Berant, 2023, inter alia) have resulted in language models with larger context windows (e.g., 4096, 32K, and even 100K tokens), but it remains unclear how these extended-context language models make use of their input contexts when performing downstream tasks. ∗Work partially completed as an intern at Samaya AI.
2307.03172#3
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
4
To tackle this discrepancy, open-ended questions are being used more often to test LLMs' performance (Chiang et al., 2023). Then by default, evaluation is done by collecting human preferences of pairwise comparisons and then calculating scores for each LLM to induce a general ranking. While the collection process is costly and time-consuming (Zheng et al., 2023), to automate and scale up the evaluation, most recent works utilize the strongest (a.k.a state-of-the-art) LLM as the judge (Dubois et al., 2023; Dettmers et al., 2023). However, various studies show that this method is problematic, as the pairwise comparison judgment provided usually contains various biases such as self-enhancement, bias towards long and verbose answers, and bias towards the first answer in a pair. Motivated by these limitations, we propose peer evaluation. The goal is to mitigate the biases in automated evaluations while still benefiting from LLM’s strong capability in reading and writing reviews. We propose the Peer Rank and Discussion-based evaluation framework (PRD). The suite consists of two
2307.02762#4
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
4
CCS Concepts: • Computing methodologies → Natural language processing; Machine learning. Additional Key Words and Phrases: large language models, evaluation, model assessment, benchmark ACM Reference Format: Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, and Xing Xie. 2018. A Survey on Evaluation of Large Language Models. J. ACM 37, 4, Article 111 (August 2018), 45 pages. https://doi.org/XXXXXXX.XXXXXXX 1 INTRODUCTION Understanding the essence of intelligence and establishing whether a machine embodies it poses a compelling question for scientists. It is generally agreed upon that authentic intelligence equips us with reasoning capabilities, enables us to test hypotheses, and prepares for future eventualities [92]. In particular, Artificial Intelligence (AI) researchers focus on the development of machine-based intelligence, as opposed to biologically based intellect [136]. Proper measurement helps to understand intelligence. For instance, measures for general intelligence in human individuals often encompass IQ tests [12].
2307.03109#4
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
4
We empirically investigate this question via controlled experiments with a variety of state-of-the-art open (MPT-30B-Instruct, LongChat-13B (16K)) and closed (OpenAI’s GPT-3.5-Turbo and Anthropic’s Claude-1.3) language models in settings that require accessing and using information within an input context. In particular, our experiments make controlled changes to the input context size and the position of the relevant information within the input context and study their effects on language model performance. If language models can robustly use information within long input contexts, then their performance should be minimally affected by the position of the relevant information in the input context.
2307.03172#4
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
5
LLM’s strong capability in reading and writing reviews. We propose the Peer Rank and Discussion-based evaluation framework (PRD). The suite consists of two alternatives that share the same format and goal – involving peer LLMs’ participation as reviewers, to reach a fairer evaluation result where all peers mutually agree. We draw insights and lessons from educational psychology research, on methodologies of student peer review [Figure 1: pairwise battles among contestants produce a win rate matrix; its dot product with the reviewer weight vector gives contestant scores, which are normalized and updated over multiple rounds, with higher-scoring reviewers receiving larger weights.]
2307.02762#5
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) is hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose (1) the peer rank (PR) algorithm, which takes into account each peer LLM's pairwise preferences of all answer pairs and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
5
Within the scope of AI, the Turing Test [193], a widely recognized test for assessing intelligence by discerning whether responses are of human or machine origin, has been a longstanding objective in AI evolution. It is generally believed among researchers that a computing machine that successfully passes the Turing Test can be considered intelligent. Consequently, when viewed from a wider lens, the chronicle of AI can be depicted as the timeline of creation and evaluation of intelligent models and algorithms. With each emergence of a novel AI model or algorithm, researchers invariably scrutinize its capabilities in real-world scenarios through evaluation on specific and challenging tasks. For instance, the Perceptron algorithm [49], touted as an Artificial General Intelligence (AGI) approach in the 1950s, was later revealed as inadequate due to its inability to resolve the XOR problem. The subsequent rise and application of Support Vector Machines (SVMs) [28] and deep learning [104] have marked both progress and setbacks in the AI landscape. A significant takeaway from previous attempts is the paramount importance of AI evaluation, which serves as a critical tool to identify current system limitations and inform the design of more powerful models.
2307.03109#5
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
5
We first experiment with multi-document question answering, which requires models to reason over provided documents to find relevant information and use it to answer a given question; this task mimics the retrieval-augmented generation setup underlying many commercial generative search and question answering applications (e.g., Bing Chat). In this setting, we control (i) the input context length by changing the number of documents in the input context (akin to retrieving more or fewer documents in retrieval-augmented generation), and (ii) the position of the relevant information within the input context by changing the order of the documents to place the relevant document at the beginning, middle, or end of the context.
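A minimal sketch of how such a controlled multi-document prompt could be assembled (this is an illustration, not the authors' released code; the function name, instruction wording, and document format are assumptions): the number of distractor documents sets the context length, and `gold_position` places the answer-bearing document at the beginning, middle, or end.

```python
from typing import List

def build_qa_prompt(question: str, gold_doc: str,
                    distractors: List[str], gold_position: int) -> str:
    """Assemble a multi-document QA prompt, inserting the answer-bearing
    document at a chosen position among the distractor documents."""
    docs = list(distractors)
    gold_position = max(0, min(gold_position, len(docs)))  # clamp to a valid slot
    docs.insert(gold_position, gold_doc)
    numbered = "\n\n".join(f"Document [{i + 1}] {d}" for i, d in enumerate(docs))
    return (
        "Write a high-quality answer for the given question using only the "
        "provided search results.\n\n"
        f"{numbered}\n\n"
        f"Question: {question}\nAnswer:"
    )

# 19 distractors -> a 20-document context; gold_position=10 puts the answer in the middle.
prompt = build_qa_prompt(
    question="Who got the first Nobel Prize in Physics?",
    gold_doc="(Title: Nobel Prize in Physics) Wilhelm Conrad Röntgen received the first Nobel Prize in Physics in 1901.",
    distractors=[f"(Title: Distractor {i}) An irrelevant passage about topic {i}." for i in range(19)],
    gold_position=10,
)
```

Varying `gold_position` while holding everything else fixed is what isolates the positional effect the chunk describes.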
2307.03172#5
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
6
Figure 1: The peer rank process (PR), where each model acts both as a reviewer (A, B, C) and a contestant (1, 2, 3). From the battles between contestants (pairwise comparisons), it induces a self-ranking. In this example, models A, B, C represent GPT-4, Bard, and Claude, respectively. reviewing (Walsh, 2014), as well as their impact and benefits (Cho and MacArthur, 2011; Yalch et al., 2019). More specifically, peer rank (PR) works for the tournament-style benchmarking setting where each LLM in pairwise matches produces an answer for an open-ended question. Instead of taking the average/majority vote to decide the final preference scoring, we propose to apply higher weights to LLM reviewers with stronger capabilities (Section 2.1). Peer discussion (PD) works for the general pairwise comparison setting. Given two “student” LLMs’ answers, we prompt two other reviewer LLMs to hold multi-turn discussions to reach a mutual agreement on the pairwise scoring. The process shares a similar format to LLMs interacting with each other through conversations, like two communicative agents (Li et al., 2023; Park et al., 2023; Fu et al., 2023b).
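A hedged sketch of what such a multi-turn peer discussion loop might look like. Here `chat(model_name, prompt)` stands in for whatever API call returns a reviewer model's reply and is not a real library function, and the prompt wording is illustrative rather than the paper's exact templates.

```python
def peer_discussion(question, answer_1, answer_2, reviewers, chat, max_rounds=3):
    """Two reviewer LLMs take turns discussing which answer is better and stop
    once both state the same preference. `chat(model_name, prompt)` is assumed
    to return that model's reply as a string."""
    transcript = [
        f"Question: {question}",
        f"Answer 1: {answer_1}",
        f"Answer 2: {answer_2}",
        "Discuss which answer is better. End your turn with 'Preference: 1' or 'Preference: 2'.",
    ]
    preferences = {}
    for turn in range(max_rounds * len(reviewers)):
        reviewer = reviewers[turn % len(reviewers)]
        reply = chat(reviewer, "\n".join(transcript))
        transcript.append(f"{reviewer}: {reply}")
        preferences[reviewer] = "1" if "Preference: 1" in reply else "2"
        if len(preferences) == len(reviewers) and len(set(preferences.values())) == 1:
            break  # mutual agreement reached
    return preferences, transcript
```

The alternating-turn structure is what gives the discussion its conversational, communicative-agent flavor described above.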
2307.02762#6
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) is hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose (1) the peer rank (PR) algorithm, which takes into account each peer LLM's pairwise preferences of all answer pairs and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
6
Recently, large language models (LLMs) have incited substantial interest across both academic and industrial domains [11, 219, 257]. As demonstrated by existing work [15], the strong performance of LLMs has raised hopes that they could represent AGI in this era. LLMs possess the capability to solve diverse tasks, in contrast with prior models confined to specific tasks. Owing to their strong performance in handling different applications, from general natural language tasks to domain-specific ones, LLMs are increasingly used by individuals with critical information needs, such as students or patients. Evaluation is of paramount importance to the success of LLMs for several reasons. First, evaluating LLMs helps us better understand their strengths and weaknesses. For instance, the PromptBench [264] benchmark illustrates that current LLMs are sensitive to adversarial prompts, so careful prompt engineering is necessary for better performance. Second, better evaluations can provide better guidance for human-LLM interaction, which could inspire future interaction design and implementation. Third, the broad applicability of LLMs underscores the paramount
2307.03109#6
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
6
We find that changing the position of relevant information in the input context can substantially affect model performance, indicating that current language models do not robustly access and use information in long input contexts. Furthermore, we observe a distinctive U-shaped performance curve (Figure 1); language model performance is highest when relevant information occurs at the very beginning (primacy bias) or end of its input context (recency bias), and performance significantly degrades when models must access and use information in the middle of their input context (§2.3). For example, when relevant information is placed in the middle of its input context, GPT-3.5-Turbo’s performance on the multi-document question answering task is lower than its performance when predicting without any documents (i.e., the closed-book setting; 56.1%). Furthermore, we find that models often have identical performance to their extended-context counterparts, indicating that extended-context models are not necessarily better at using their input context (§2.3).
2307.03172#6
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
7
reviews. Afterwards, we provide analysis of the discussions, including the effects of prompting strategies on multi-turn peer discussions, what the discussion bias is, and the phenomenon of opinion altering. The analysis of discussion bias shows that the model which leads the discussion is less likely to alter its opinion. The opinion-altering analysis supports the previous analysis and shows that models’ overall ability is highly aligned with whether they can hold their opinions. # 2 Methodologies In general, peer rank can be applied to induce a self-ranking, e.g. to create a leaderboard of LLMs’ capabilities, while peer discussion provides more benefits in comparing the capabilities of two models (a more fine-grained and interactive comparison). We elaborate on their technical details in this section.
2307.02762#7
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) is hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose (1) the peer rank (PR) algorithm, which takes into account each peer LLM's pairwise preferences of all answer pairs and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
7
Natural language understanding:
(1) Sentiment analysis: Bang et al. [6] / Liang et al. [114] / Lopez-Lira and Tang [129] / Qin et al. [159] / Wang et al. [218] / Zhang et al. [251]
(2) Text classification: Liang et al. [114] / Peña et al. [154] / Yang and Menczer [233]
(3) Natural language inference: Lee et al. [105] / Qin et al. [159]
(4) Others: Choi et al. [23] / Riccardi and Desai [166] / Tao et al. [184]
Reasoning: Bang et al. [6] / Bian et al. [9] / Frieder et al. [45] / Fu et al. [47] / Gendron et al. [56] / Jiang et al. [86] / Liévin et al. [117] / Liu et al. [124] / Orrù et al. [147] / Pan et al. [151] / Qin et al. [159] / Saparov et al. [170] / Wu et al. [227] / Wu et al. [226] / Xu et al. [229] / Zhuang et
2307.03109#7
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
7
Given that language models struggle to retrieve and use relevant information in the multi-document question answering task, to what extent can language models even retrieve from their input contexts? We study this question with a synthetic key-value retrieval task, which is designed to be a minimal testbed for the basic ability to retrieve matching tokens from the input context. In this task, models are given a collection of JSON-formatted key-value pairs and must return the value associated with a specific key. Similar to the multi-document QA task, the key-value retrieval task admits controlled changes to the input context length (adding more key-value pairs) and the position of relevant information. Although some models perform the synthetic key-value retrieval task perfectly, other models struggle to simply retrieve matching tokens that occur in the middle of their input context and continue to exhibit a U-shaped performance curve. To better understand why language models struggle to robustly access and use information in their input contexts, we study the role of model architecture (decoder-only vs. encoder-decoder), query-aware contextualization, and instruction fine-tuning (§4). We find that:
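A small sketch of how such a synthetic key-value retrieval instance could be generated, assuming random UUID strings as keys and values; the function name and prompt wording are illustrative, not the paper's exact code.

```python
import json
import random
import uuid

def make_kv_retrieval_example(num_pairs: int, query_position: int, seed: int = 0):
    """Build a JSON object of random UUID key-value pairs plus a prompt asking
    for the value of the key at `query_position` in the serialized object."""
    rng = random.Random(seed)
    pairs = [(str(uuid.UUID(int=rng.getrandbits(128))),
              str(uuid.UUID(int=rng.getrandbits(128)))) for _ in range(num_pairs)]
    query_key, expected_value = pairs[query_position]
    data = dict(pairs)  # dicts keep insertion order, so the queried key's position is controlled
    prompt = (
        "JSON data:\n" + json.dumps(data, indent=1) +
        f'\n\nWhat is the value associated with the key "{query_key}"?'
    )
    return prompt, expected_value

prompt, gold = make_kv_retrieval_example(num_pairs=75, query_position=37)  # key in the middle
```

Adding more pairs lengthens the context, and moving `query_position` mirrors the positional manipulation used in the multi-document QA task.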
2307.03172#7
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
8
We conduct extensive experiments and analysis to test PR’s and PD’s performance in providing fair pairwise comparisons. PR is tested on Vicuna80, which contains pairwise judgments from human annotators. Our method substantially improves correlations with human judgments and rankings; this paradigm also enables a group of LLMs to induce a self-ranking. PD is tested on both Vicuna80 and LFQA (Xu et al., 2023), which includes annotated pairwise comparisons of Human-Machine and Machine-Machine answers. PD enables LLMs to achieve pairwise comparisons that are more accurate than single model-based reviews, especially in improving weaker model's # 2.1 Peer Rank and Scoring We provide an illustration of the peer rank algorithm in Figure 1. The general idea is to obtain weighted scores for each battle from the peer reviewers’ judgments, then induce self-rankings from the scores. This process is iterated multiple times until the scores converge. Given a set of questions, Q, we generate an answer to each question for each language model. Let A_m(q) be the answer generated to question q ∈ Q by model m. We then generate pairwise comparisons between answers to the same question, using the language models themselves along with human annotators to compare answers.
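A minimal sketch of the iterative weighting idea described above (my paraphrase, not the authors' implementation): each reviewer's pairwise verdicts are aggregated into per-contestant scores, reviewers are re-weighted in proportion to their own scores (every model is both reviewer and contestant), and the loop repeats until the scores stop changing.

```python
import numpy as np

def peer_rank(win: np.ndarray, num_iters: int = 100, tol: float = 1e-8) -> np.ndarray:
    """win[r, i, j] is reviewer r's verdict for contestant i vs. contestant j
    (1.0 win, 0.5 tie, 0.0 loss); the same models act as reviewers and contestants.
    Returns a normalized score vector over contestants."""
    num_models = win.shape[0]
    weights = np.ones(num_models) / num_models          # start from uniform reviewer weights
    scores = np.zeros(num_models)
    for _ in range(num_iters):
        judged = np.einsum("r,rij->ij", weights, win)   # reviewer-weighted verdict matrix
        new_scores = judged.sum(axis=1) / (num_models - 1)  # average weighted win rate per contestant
        new_scores = new_scores / new_scores.sum()      # normalize so scores sum to 1
        if np.allclose(new_scores, scores, atol=tol):
            break
        scores = new_scores
        weights = scores                                # stronger contestants get larger reviewer weight
    return scores
```

With three models this reduces to the Figure 1 setting: the peers jointly review each other's battles, and the converged scores double as a self-ranking.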
2307.02762#8
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) is hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose (1) the peer rank (PR) algorithm, which takes into account each peer LLM's pairwise preferences of all answer pairs and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
8
Saparov et al. [170] / Wu et al. [227] / Wu et al. [226] / Xu et al. [229] / Zhuang et al. [265] / Zhang et al. [244]
Natural language processing
Natural language generation:
(1) Summarization: Bang et al. [6] / Liang et al. [114] / Pu and Demberg [158] / Qin et al. [159]
(2) Dialogue: Bang et al. [6] / Lin and Chen [121] / Qin et al. [159] / Zheng et al. [259]
(3) Translation: Bang et al. [6] / Lyu et al. [130] / Wang et al. [208]
(4) Question answering: Bai et al. [5] / Bang et al. [6] / Bian et al. [9] / Laskar et al. [102] / Liang et al. [114] / Qin et al. [159]
(5) Others: Chen et al. [20] / Chia et al. [22] / Pu and Demberg [158]
Multilingual: Abdelali et al. [1] / Ahuja et al. [2] / Bang et al. [6] / Lai et
2307.03109#8
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
8
• Encoder-decoder models are relatively robust to changes in the position of relevant information within their input context, but only when evaluated on sequences within their training-time sequence length. When evaluated on sequences longer than those seen during training, we observe a U-shaped performance curve (§4.1).
• Query-aware contextualization (placing the query before and after the documents or key-value pairs) enables near-perfect performance on the synthetic key-value task, but minimally changes trends in multi-document QA (§4.2).
• Even base language models (i.e., without instruction fine-tuning) show a U-shaped performance curve as we vary the position of relevant information in the input context.
Our results indicate that prompting language
2307.03172#8
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
9
annotators to compare answers. Using these automated and human pairwise comparisons, we create a tournament among models, with each battle representing two models (the contestants) competing to answer a question as best they can. The comparison of the answers in a battle by another model (the reviewer) forms a review. Let K_r(x, y) be the score given by the reviewer r to the pair of answers (x, y). We use a score of −1 to indicate the first answer is better, 0 to indicate a tie, and 1 to indicate the second answer is better. The score given by a model may be dependent on the order of answers provided. Suppose we have a set of reviewer models R and a set of contestant models C. We form a set of battle reviews, B = {(q, i, j, r, s) | q ∈ Q, (i, j) ∈ C², r ∈ R}, where s = K_r(A_i(q), A_j(q)) is the score given by reviewer r to the answers/responses generated by i and j for question q. We create a shorthand K_r^{ij}(q) for this review.
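As a concrete illustration of this battle-review bookkeeping, the minimal sketch below stores each review as a (q, i, j, r, s) tuple and exposes the K_r^{ij}(q) shorthand as a lookup. The names (BattleReview, build_lookup, the model identifiers) are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

# A single battle review (q, i, j, r, s): reviewer r scores the ordered
# answer pair (A_i(q), A_j(q)) with s in {-1, 0, 1}
# (-1: first answer better, 0: tie, 1: second answer better).
@dataclass(frozen=True)
class BattleReview:
    question: str   # q
    first: str      # contestant i (its answer is shown first)
    second: str     # contestant j (its answer is shown second)
    reviewer: str   # r
    score: int      # s = K_r(A_i(q), A_j(q))

def build_lookup(reviews: List[BattleReview]) -> Dict[Tuple[str, str, str, str], int]:
    """Index reviews so the shorthand K_r^{ij}(q) can be read off as K[(r, i, j, q)]."""
    return {(b.reviewer, b.first, b.second, b.question): b.score for b in reviews}

# Example: reviewer "gpt4" judges that the second answer wins for question "q1".
reviews = [BattleReview("q1", "vicuna", "claude", "gpt4", 1)]
K = build_lookup(reviews)
assert K[("gpt4", "vicuna", "claude", "q1")] == 1
```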
2307.02762#9
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
9
Demberg [158]
Multilingual: Abdelali et al. [1] / Ahuja et al. [2] / Bang et al. [6] / Lai et al. [100] / Zhang et al. [250]
Factuality: Gekhman et al. [55] / Honovich et al. [74] / Manakul et al. [133] / Min et al. [138] / Pezeshkpour [156] / Wang et al. [204]
Robustness / Ethics / Biases / Trustworthiness:
Robustness: Li et al. [111] / Liu et al. [123] / Wang et al. [207] / Wang et al. [206] / Yang et al. [234] / Zhao et al. [258] / Zhu et al. [264] / Zhuo et al. [267]
Ethics and biases: Cao et al. [16] / Deshpande et al. [35] / Dhamala et al. [37] / Ferrara [42] / Gehman et al. [53] / Hartmann et al. [65] / Hendrycks et al. [69] / Parrish et al. [153] / Rutinowski et al. [167] / Sheng et al. [175]
2307.03109#9
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
9
Our results indicate that prompting language models with longer input contexts is a trade-off: providing the language model with more information may help it perform the downstream task, but it also increases the amount of content that the model must reason over, potentially decreasing accuracy. To better understand this trade-off in practice, we perform a case study with retriever-reader models on open-domain question answering (§5). In contrast to our controlled multi-document QA task, where the context always contains exactly one document that answers the question, none or many of the top k documents may contain the answer in the open-domain QA setting. When retrieving from Wikipedia to answer queries from NaturalQuestions-Open, we find that model performance saturates long before retriever recall saturates, indicating that current models fail to effectively use additional retrieved documents: using 50 documents instead of 20 retrieved documents only marginally improves performance (∼1.5% for GPT-3.5-Turbo and ∼1% for claude-1.3).
2307.03172#9
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
10
Based on these peer reviews, we can evaluate models based on their performance, by calculating metrics such as the win rate of each contestant, and the Elo ratings of each contestant. Since each model is being ranked by its peers, we call this Peer Rank.
# 2.1.1 Win rate Calculation
The win rate for a contestant is the number of wins for that contestant divided by the number of battles it participates in. Ties are counted as 0.5 wins for both contestants. Our win rate calculation gives differing weight to the scores provided by different reviewers (A, B, C) based on the performance of the corresponding reviewers as contestants (1, 2, 3). This operates on the assumption that models which are better contestants are also more fit to evaluate and compare answers, so they should be given more weight in evaluation (Equation 2). Put another way, since the score is a measure of their ability to review/grade correctly, we weigh the win rate an LLM gives another LLM by their own score (Walsh, 2014). Initially, all reviewers are given the same weight. On each iteration of the calculation, the win rate for each contestant is calculated using the current weights. The win rates are scaled to the range of [0, 1] using a linear scaling, and then again scaled so that their sum is 1, and these results are used as the weights for the next round.
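To make the weighting idea concrete, here is a minimal sketch (an illustration, not the authors' code) of how one contestant's weighted win rate combines the per-reviewer win rates with the current reviewer weights; the dictionaries `reviewer_weights` and `win_rate_by_reviewer` and their values are hypothetical.

```python
# Weighted win rate for one contestant: each reviewer's reported win rate
# is weighted by how much we currently trust that reviewer.
reviewer_weights = {"gpt4": 0.5, "claude": 0.3, "vicuna": 0.2}    # sums to 1
win_rate_by_reviewer = {"gpt4": 0.70, "claude": 0.65, "vicuna": 0.40}

weighted_win_rate = sum(
    reviewer_weights[r] * win_rate_by_reviewer[r] for r in reviewer_weights
)
print(round(weighted_win_rate, 3))  # 0.5*0.70 + 0.3*0.65 + 0.2*0.40 = 0.625
```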
2307.02762#10
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
10
Hendrycks et al. [69] / Parrish et al. [153] / Rutinowski et al. [167] / Sheng et al. [175] / Simmons [176] / Wang et al. [209] / Zhuo et al. [266] / Zhao et al. [256]
Trustworthiness: Hagendorff and Fabi [62] / Wang et al. [201] / Liu et al. [123] / Li et al. [113] / Rawte et al. [163] / Xie et al. [228] / Zhang et al. [253]
Social science: Deroy et al. [34] / Frank [44] / Nay et al. [139] / Wu et al. [224] / Ziems et al. [269]
Natural science & engineering:
Mathematics: Arora et al. [3] / Bubeck et al. [15] / Collins et al. [27] / Dao and Le [31] / Wei et al. [221] / Wu et al. [225] / Yuan et al. [241] / Yu et al. [237]
General science: Arora et al. [3] / Castro Nascimento and Pimentel [18] / Guo et al. [61]
Engineering:
2307.03109#10
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
10
Our analysis provides a better understanding of how language models use their input context and introduces new evaluation protocols for future long-context models; to claim that a language model can robustly use information within long input contexts, it is necessary to show that its performance is minimally affected by the position of the relevant information in the input context (e.g., minimal difference in best- and worst-case performance). To facilitate further work on understanding and improving how language models use their input context, we release our code and evaluation data (footnote 1: nelsonliu.me/papers/lost-in-the-middle).
# 2 Multi-Document Question Answering
Our goal is to better understand how language models use their input context. To this end, we analyze model performance on multi-document question answering, which requires models to find relevant information within an input context and use it to answer the question. In particular, we make controlled changes to the length of the input context and the position of the relevant information and measure changes in task performance.
# 2.1 Experimental Setup
In the multi-document question answering task, the model inputs are (i) a question to answer and (ii) k documents (e.g., passages from Wikipedia), where exactly one of the documents contains the answer
2307.03172#10
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
11
so that their sum is 1, and these results are used as the weights for the next round.
Let W_r^c be the raw win rate of contestant c ∈ C from the reviews of reviewer r ∈ R. This is equal to the number of times that c wins a battle plus half of the number of times that c ties, divided by the number of battles that c participates in.
W_r^c = ( Σ_{q∈Q} Σ_{d∈C, d≠c} [ f(K_r^{dc}(q)) + f(−K_r^{cd}(q)) ] ) / ( 2|Q|(|C| − 1) )    (1)
where f(score) = (score + 1)/2 maps a score of (loss = −1, tie = 0, win = 1) for the second contestant to a win count of (0, 0.5, 1), so that ties count as half of a win. We negate K_r^{cd}(q) when inputting it into f so that the win value of c is computed instead of d. Also, since there are |Q| questions, |C| − 1 contestants to battle, and 2 orders for two contestants to battle, there are 2|Q|(|C| − 1) battles involving a fixed contestant c.
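A minimal sketch of the raw win rate in Equation (1), assuming battles are stored as (q, i, j, r, s) tuples as in the earlier sketch; the function names and example numbers are illustrative, not the paper's released implementation.

```python
from typing import Iterable, Tuple

Battle = Tuple[str, str, str, str, int]  # (q, i, j, r, s) with s in {-1, 0, 1}

def f(score: int) -> float:
    """Map a score for the *second* contestant (-1 loss, 0 tie, 1 win) to a win count."""
    return (score + 1) / 2

def raw_win_rate(battles: Iterable[Battle], contestant: str, reviewer: str,
                 num_questions: int, num_contestants: int) -> float:
    """W_r^c: wins (plus half-wins for ties) of `contestant` according to `reviewer`,
    divided by the 2*|Q|*(|C|-1) battles the contestant takes part in."""
    wins = 0.0
    for q, i, j, r, s in battles:
        if r != reviewer:
            continue
        if i == contestant:      # contestant answered first: negate s before f
            wins += f(-s)
        elif j == contestant:    # contestant answered second
            wins += f(s)
    return wins / (2 * num_questions * (num_contestants - 1))

# Example: one question, two contestants; reviewer "gpt4" sees both orderings.
battles = [("q1", "a", "b", "gpt4", -1),   # first answer (a) judged better
           ("q1", "b", "a", "gpt4", 1)]    # second answer (a) judged better
print(raw_win_rate(battles, "a", "gpt4", num_questions=1, num_contestants=2))  # 1.0
```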
2307.02762#11
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
11
& engineering
General science: Arora et al. [3] / Castro Nascimento and Pimentel [18] / Guo et al. [61]
Engineering: Bubeck et al. [15] / Liu et al. [125] / Pallagani et al. [150] / Sridhara et al. [181] / Valmeekam et al. [195] / Valmeekam et al. [194] / Zhuang et al. [265]
Medical applications:
Medical queries: Chervenak et al. [21] / Duong and Solomon [39] / Hamidi and Roberts [63] / Holmes et al. [73] / Jahan et al. [81] / Johnson et al. [87] / Samaan et al. [169] / Thirunavukarasu et al. [186]
Medical examination: Gilson et al. [57] / Kung et al. [97]
Medical assistants: Cascella et al. [17] / Khan et al. [93] / Lahat et al. [99] / Lyu et al. [131] / Oh et al. [143] / Wang et al. [217]
Agent applications: Huang et al. [77] / Karpas et al. [90] / Parisi
2307.03109#11
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
11
to the question and k − 1 “distractor” documents do not. This task requires the model to access the document that contains the answer within its input context and use it to answer the question. Figure 2 presents an example. We instantiate this task with data from NaturalQuestions-Open (Lee et al., 2019; Kwiatkowski et al., 2019), which contains historical queries issued to the Google search engine, coupled with human-annotated answers extracted from Wikipedia. In particular, we take the 2655 queries where the annotated long answer is a paragraph (as opposed to a list or a table). We use passages (chunks of at most 100 tokens) from Wikipedia as documents within our input contexts. For each of the queries, we need a document that contains the answer and k − 1 distractor documents that do not contain the answer. To obtain a document that answers the question, we use the Wikipedia paragraph that contains the answer from the NaturalQuestions annotations.
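As a rough sketch of how such an input context could be assembled, the snippet below places one answer-bearing document at a chosen position among the distractors. This is not the authors' released code; the prompt wording, function name, and example documents are illustrative assumptions.

```python
from typing import List

def build_context(question: str, gold_doc: str, distractors: List[str],
                  gold_position: int) -> str:
    """Place the answer-bearing document at `gold_position` (0-indexed)
    among the distractors, which are otherwise kept in relevance order."""
    docs = distractors[:gold_position] + [gold_doc] + distractors[gold_position:]
    numbered = "\n".join(f"Document [{i + 1}] {d}" for i, d in enumerate(docs))
    return (
        "Answer the question using only the provided search results.\n\n"
        f"{numbered}\n\nQuestion: {question}\nAnswer:"
    )

# Example with k = 3 documents and the gold document in the middle position.
prompt = build_context(
    question="who got the first nobel prize in physics",
    gold_doc="(Nobel Prize in Physics) ... first awarded to Wilhelm Conrad Röntgen ...",
    distractors=["(Nobel Prize) ...", "(List of Nobel laureates) ..."],
    gold_position=1,
)
print(prompt)
```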
2307.03172#11
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
12
Let α_r^k be the weight assigned to reviewer r after iteration k. Initially, α_r^0 = 1/|R|, so that all reviewers have the same weight, and the weights add to 1. We assume each reviewer LLM has the same capabilities to start. The score of contestant c ∈ C for iteration k is the weighted average of the raw win rates for contestant c. We set the weights for the next iteration to α^k:
score_c^k = Σ_{r∈R} α_r^{k−1} · W_r^c    (2)
α^k = Normalize(MinMax(score^k))
where the weights are scaled to a range of [0, 1] and finally normalized to have sum equal to 1:
MinMax(S) = (S − min_{r∈R}(S_r)) / (max_{r∈R}(S_r) − min_{r∈R}(S_r))    (3)
Normalize(S) = S / Σ_{r∈R} S_r
Given this set of equations, we look for the fixed/converging point of the framework. This is reminiscent of the problem faced by the PageRank algorithm (Page et al., 1999). The detailed equivalent implementation of PR is shown in the Appendix Section 2.
# 2.1.2 Elo Calculation
Another method for calculating the performance of a contestant relative to other contestants is the
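Complementing the sketches above, here is a small illustration of the fixed-point weighting in Equations (2) and (3): reviewer weights start uniform, scores are weighted averages of raw win rates, and the scores are min-max scaled and normalized to become the next weights. The win-rate matrix `W`, its numbers, and the convergence tolerance are hypothetical choices, not the paper's implementation.

```python
from typing import Dict

def minmax(scores: Dict[str, float]) -> Dict[str, float]:
    """Linearly rescale scores to the range [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    return {m: (s - lo) / (hi - lo) for m, s in scores.items()}

def normalize(scores: Dict[str, float]) -> Dict[str, float]:
    """Rescale scores so they sum to 1."""
    total = sum(scores.values())
    return {m: s / total for m, s in scores.items()}

# W[r][c]: raw win rate of contestant c according to reviewer r (made-up numbers).
W = {"gpt4":   {"gpt4": 0.8, "claude": 0.6, "vicuna": 0.4},
     "claude": {"gpt4": 0.7, "claude": 0.7, "vicuna": 0.3},
     "vicuna": {"gpt4": 0.6, "claude": 0.5, "vicuna": 0.5}}

models = list(W)
alpha = {m: 1 / len(models) for m in models}           # uniform initial weights
for _ in range(100):                                    # iterate toward the fixed point
    score = {c: sum(alpha[r] * W[r][c] for r in models) for c in models}
    new_alpha = normalize(minmax(score))
    if max(abs(new_alpha[m] - alpha[m]) for m in models) < 1e-9:
        break
    alpha = new_alpha
print({m: round(a, 3) for m, a in alpha.items()})
```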
2307.02762#12
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
12
et al. [143] / Wang et al. [217]
Agent applications: Huang et al. [77] / Karpas et al. [90] / Parisi et al. [152] / Qin et al. [160] / Qin et al. [161] / Schick et al. [172] / Shen et al. [174]
Other applications:
Education: Dai et al. [30] / de Winter [32] / Hellas et al. [67] / Wang and Demszky [210] / Wei et al. [221]
Search and recommendation: Dai et al. [29] / Fan et al. [40] / Lanzi and Loiacono [101] / Sun et al. [183] / Thakur et al. [185] / Xu et al. [232] / Yuan et al. [240] / Zhang et al. [246]
Personality testing: Bodroza et al. [10] / Jentzsch and Kersting [84] / Liang et al. [115] / Safdari et al. [168] / Song et al. [180] / Wang et al. [212]
Specific tasks: Lanzi and Loiacono [101] / Le and Zhang
2307.03109#12
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
12
To collect k − 1 distractor documents that do not contain the answer, we use a retrieval system (Contriever, fine-tuned on MS-MARCO; Izacard et al., 2021) to retrieve the k − 1 Wikipedia chunks that are most relevant to the query and do not contain any of the NaturalQuestions-annotated answers.2,3 In the input context, the distractor documents are presented in order of decreasing relevance.4 To modulate the position of relevant information within the input context, we adjust the order of the documents to change the position of the document that contains the answer (Figure 3). To modulate the input context length in this task, we increase or decrease the number of retrieved documents that do not contain the answer (Figure 4). Following Kandpal et al. (2022) and Mallen et al. (2023), we use accuracy as our primary evaluation metric, judging whether any of the correct answers (as taken from the NaturalQuestions annotations) appear in the predicted output.
Footnote 2: Ambiguity in NaturalQuestions-Open means that a small number of distractor passages may contain a reasonable answer. We additionally run experiments on a subset of unambiguous questions, finding similar results and conclusions; see Appendix A.
Footnote 3: We also explored using random documents as distractors, see Appendix B for more details.
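A minimal sketch of that accuracy judgment (does any annotated answer string appear in the model's output), with normalization choices of our own; the paper's exact evaluation code may differ.

```python
import string

def normalize(text: str) -> str:
    """Lowercase and strip punctuation and extra whitespace before matching."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())

def is_correct(prediction: str, gold_answers: list) -> bool:
    """Accuracy judgment: does any annotated answer appear in the prediction?"""
    pred = normalize(prediction)
    return any(normalize(ans) in pred for ans in gold_answers)

print(is_correct("The first Nobel Prize in Physics went to Wilhelm Conrad Röntgen.",
                 ["Wilhelm Conrad Röntgen", "Röntgen"]))  # True
```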
2307.03172#12
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
13
2.1.2 Elo Calculation Another method for calculating the performance of a contestant relative to other contestants is the [Figure 2 content: a peer-discussion example on the question "How do credit/debit cards work? What is the process of putting money in and getting it out?", in which Reviewers A and B exchange initial reviews of two candidate answers, discuss them with each other's reviews in mind, and one reviewer ultimately changes its preference to Answer 1 after weighing accessibility and comprehensiveness against conciseness and technical language.]
2307.02762#13
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
13
[168] / Song et al. [180] / Wang et al. [212]. Specific tasks: Lanzi and Loiacono [101] / Le and Zhang [103] / Wang et al. [216]. General benchmarks: Xiezhi [59] / MMLU [70] / C-Eval [78] / OpenLLM [80] / DynaBench [94] / Chatbot Arena [128] / AlpacaEval [112] / HELM [114] / BIG-bench [182] / PandaLM [216] / BOSS [239] / GLUE-X [234] / KoLA [236] / AGIEval [262] / PromptBench [264] / MT-Bench [260] / LLMEval2 [252]. Specific benchmarks: SOCKET [23] / Choice-75 [75] / CUAD [71] / TRUSTGPT [79] / MATH [72] / APPS [68] / CELLO [66] / EmotionBench [76] / CMMLU [108] / API-Bank [109] / M3KE [122] / UHGEval [116] / ARB [171] / MultiMedQA [177] / CVALUES [230] / ToolBench [191] / FRESHQA [198] / CMB
2307.03109#13
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
13
3We also explored using random documents as distractors; see Appendix B for more details. 4Since there might be a prior over “search results” appearing in ranked order, we explored randomly ordering the k − 1 distractor documents and mentioning that the documents are randomly ordered in the task description, but found the same trends. See Appendix C for more details. Input Context: Write a high-quality answer for the given question using only the provided search results (some of which might be irrelevant). Document [1] (Title: Asian Americans in science and technology) Prize in physics for discovery of the subatomic particle J/ψ. Subrahmanyan Chandrasekhar shared... Document [2] (Title: List of Nobel laureates in Physics) The first Nobel Prize in Physics was awarded in 1901 to Wilhelm Conrad Röntgen, of Germany, who received... Document [3] (Title: Scientist) and pursued through a unique method, was essentially in place. Ramón y Cajal won the Nobel Prize in 1906 for his remarkable... Question: who got the first nobel prize in physics Answer: Desired Answer: Wilhelm Conrad Röntgen. Figure 2: Example of the multi-document question answering task, with an input context and the desired model answer. The document containing the answer is bolded within the input context here for clarity.
2307.03172#13
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
14
Figure 2: The peer discussion process (PD). Blue and orange texts describe advantages of answer 1 and answer 2. In this example, the two LLM reviewers finally reach the mutual agreement of selecting answer 1 (the human-written answer), which correlates with human reviewer preference. More discussion examples can be found in Appendix Section F. Elo rating (Elo, 1967; Askell et al., 2021). The Elo rating method takes a sequence of pairwise reviews and generates ratings for each contestant, with a greater rating indicating better performance. Based on a similar idea, we assign different weights to reviewers based on their previous performance, such that a review from a higher-weight reviewer has a greater influence upon Elo ratings. As with the win rate calculation, we start with equal weights on all reviewers and then normalize the resulting Elo ratings to give weights for the next iteration. We repeat the Elo calculation with the new weights, update the weights based on the new ratings, and continue repeating until it converges.
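A minimal sketch of this weighted-Elo iteration, assuming each battle is a (reviewer, winner, loser) triple and that the reviewers are themselves contestants whose normalized ratings become the next round's weights; the K-factor and the fixed number of rounds are illustrative assumptions, not the paper's exact settings.

```python
from collections import defaultdict


def weighted_elo(battles, weights, k=32.0, initial=1000.0):
    # One pass of Elo over (reviewer, winner, loser) battles; the reviewer's weight scales each update.
    ratings = defaultdict(lambda: initial)
    for reviewer, winner, loser in battles:
        expected = 1.0 / (1.0 + 10.0 ** ((ratings[loser] - ratings[winner]) / 400.0))
        delta = k * weights.get(reviewer, 1.0) * (1.0 - expected)
        ratings[winner] += delta
        ratings[loser] -= delta
    return dict(ratings)


def iterate_reviewer_weights(battles, reviewers, n_rounds=10):
    # Start with equal weights, recompute Elo, renormalize ratings into weights, and repeat.
    # (The paper iterates until convergence; a fixed round count keeps the sketch simple.)
    weights = {r: 1.0 / len(reviewers) for r in reviewers}
    ratings = {}
    for _ in range(n_rounds):
        ratings = weighted_elo(battles, weights)
        total = sum(ratings.get(r, 1000.0) for r in reviewers)
        weights = {r: ratings.get(r, 1000.0) / total for r in reviewers}
    return weights, ratings
```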
2307.02762#14
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
14
[116] / ARB [171] / MultiMedQA [177] / CVALUES [230] / ToolBench [191] / FRESHQA [198] / CMB [211] / MINT [213] / Dialogue CoT [205] / M3Exam [250] / GAOKAO-Bench [245] / SafetyBench [254]. Multi-modal benchmarks: MME [46] / MMBench [126] / SEED-Bench [107] / MM-Vet [238] / LAMM [235] / LVLM-eHub [231]. Evaluation criterion, automatic evaluation: Bang et al. [6] / Jain et al. [82] / Lin and Chen [121] / Qin et al. [159] / Wang et al. [216]; human evaluation: Askell et al. [4] / Bang et al. [6] / Bubeck et al. [15] / Liang et al. [114] / Singhal et al. [178] / Ziems et al. [269]. Tasks: success and failure cases of LLMs. Human-in-the-loop: AdaVision [50] / AdaTest [164]. Benchmark and evaluations, crowd-sourcing testing: DynaBench [94] / DynaBoard [132] /
2307.03109#14
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
14
Figure 2: Example of the multi-document question answering task, with an input context and the desired model answer. The document containing the answer is bolded within the input context here for clarity. [Figure 3 content: two example input contexts from Figure 2 in which the documents are reordered so that the document containing the answer appears at different positions among the distractors; the question ("who got the first nobel prize in physics") and the desired answer (Wilhelm Conrad Röntgen) are unchanged.] Figure 3: Modulating the position of relevant information within the input context for the multi-document question answering example presented in Figure 2. Reordering the documents in the input context does not affect the desired output.
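The position manipulation illustrated by Figure 3 can be sketched as follows; the prompt wording is taken from the figure, while the helper name and the document representation are our own illustrative choices.

```python
def build_qa_prompt(question, gold_doc, distractors, gold_position):
    # Insert the answer-bearing document at `gold_position` (0-indexed); distractors keep
    # their retrieval order, so only the position of the relevant document changes.
    docs = list(distractors)
    docs.insert(gold_position, gold_doc)
    lines = ["Write a high-quality answer for the given question using only the provided "
             "search results (some of which might be irrelevant)."]
    for i, (title, text) in enumerate(docs, start=1):
        lines.append(f"Document [{i}](Title: {title}) {text}")
    lines.extend([f"Question: {question}", "Answer:"])
    return "\n".join(lines)


# Sweeping gold_position over 0 .. len(distractors) yields the positions evaluated in the
# paper, without changing the desired answer.
```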
2307.03172#14
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
15
A brief overview of the actual Elo ratings calculation follows. All contestants start out with an initial rating of 1000. On each battle, the expected likelihood of each contestant winning is calculated based on the difference between their Elo ratings. The Elo rating of the winner is increased, and the rating of the loser is decreased. The magnitude of the Elo ratings change is inversely related to the outcome's likelihood. In our calculations, we weight reviewers so that reviews by a high-weight reviewer cause larger changes in Elo. For more details, please refer to Appendix Section 2. # 2.2 Peer Discussions In Figure 2, we demonstrate the peer discussion process between two LLMs (A and B). The input is a given question and two answers, as well as the initial reviews by A and B. The two answers may be both generated by machines, or one by a human and the other by a machine (e.g. GPT-3 vs. human answers). The two reviews are generated by LLMs (A and B), which are called reviewers/judges. They first conduct pairwise comparisons separately, providing explanations and indicating their preferred answer by outputting the number 1 or 2 at the end (the prompt for getting initial reviews is listed in Appendix 10).
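The per-battle update described here, with the reviewer's weight scaling the rating change, can be written down directly; the K-factor of 32 is a conventional choice and an assumption on our part, as the text does not specify one.

```python
def weighted_elo_update(r_winner, r_loser, reviewer_weight, k=32.0):
    # Expected probability that the eventual winner wins, given the current rating difference.
    expected_win = 1.0 / (1.0 + 10.0 ** ((r_loser - r_winner) / 400.0))
    # Unlikely outcomes move ratings more; reviews from high-weight reviewers move them more still.
    delta = k * reviewer_weight * (1.0 - expected_win)
    return r_winner + delta, r_loser - delta


# Two contestants that both start at 1000 shift by k * weight / 2 after one decisive review.
print(weighted_elo_update(1000.0, 1000.0, reviewer_weight=1.0))  # (1016.0, 984.0)
```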
2307.02762#15
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
15
AdaVision [50] / AdaTest [164]. Benchmark and evaluations, crowd-sourcing testing: DynaBench [94] / DynaBoard [132] / DynamicTempLAMA [135] / DynaTask [188]. More challenging tasks: HELM [114] / AdaFilter [157] / CheckList [165] / Big-Bench [182] / DeepTest [190] / PromptBench [264]. Challenges: (1) designing AGI benchmarks; (2) complete behavioral evaluation; (3) robustness evaluation; (4) dynamic and evolving evaluation; (5) principled and trustworthy evaluation; (6) unified evaluation that supports all LLMs tasks; (7) beyond evaluation: LLMs enhancement.
2307.03109#15
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
15
Figure 4: Modulating the input context length of the multi-document question answering example presented in Figure 2. Adding documents that do not contain the answer increases the length of the input context, but does not affect the desired output. Our experimental setup is similar to the needle-in-a-haystack experiments of Ivgi et al. (2023), who compare question answering performance when the relevant paragraph is placed (i) at the beginning of the input or (ii) at a random position within the input. They find that encoder-decoder models have significantly higher performance when relevant information is placed at the start of the input context. In contrast, we study finer-grained changes in the position of relevant information. # 2.2 Models We analyze several state-of-the-art open and closed language models. We use greedy decoding when generating outputs and leave exploration of other decoding methods to future work. We use a standard set of prompts for each model (Figure 2).
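Greedy decoding, as mentioned above, amounts to disabling sampling at generation time. A small Hugging Face Transformers sketch is shown below; "gpt2" is only a lightweight stand-in for the much larger models evaluated in the paper, and the generation length is an arbitrary choice, not the authors' harness.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")          # stand-in model for illustration
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Question: who got the first nobel prize in physics\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, do_sample=False, max_new_tokens=32)  # greedy decoding
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```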
2307.03172#15
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
16
Then, the two models have a discussion about their reviews for multiple turns (the number of turns is fixed). The specific prompt for discussion is listed in Table 1. At the very beginning, a system prompt (role prompt) tells models their role – whether it is reviewer A or reviewer B (e.g. Claude or GPT-4). Then, all information, including the question and two comparison answers, as well as the initial reviews, are listed line by line. The order of initial reviews is the same as the order of reviewers in discussions. In other words, if reviewer A leads the discussion, reviewer A's initial review is listed first. Right before the start of the discussion, the system prompt specifies the detailed requirements which provide explicit aspects to focus on. Specifically, we draw insights from WebGPT (Nakano et al., 2021)'s annotation guideline (OpenAI, 2022). For long-form question answering, it mainly focuses on 1. Unsupported information: detecting information with no support; assume the worst case: that all of it is false. This aspect is most important and often determines the overall rating; 2. Core information: about whether the question has actually been answered;
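A rough sketch of how such a discussion context could be assembled is shown below; the function name, field labels, and exact wording are our own illustrative choices, not the paper's actual prompt (which is given in its Table 1 and appendix).

```python
def build_discussion_prompt(role, question, answer_1, answer_2, initial_reviews, requirements):
    # `initial_reviews` is an ordered list of (reviewer_name, review_text); the discussion
    # leader's initial review comes first, matching the speaking order in the discussion.
    lines = [f"System: You are {role}.",
             f"Question: {question}",
             f"Answer 1: {answer_1}",
             f"Answer 2: {answer_2}"]
    for reviewer, review in initial_reviews:
        lines.append(f"Initial review by {reviewer}: {review}")
    # Explicit aspects to focus on (unsupported information, core information, coherence).
    lines.append(f"System: {requirements}")
    return "\n".join(lines)
```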
2307.02762#16
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
16
[Fig. 1 content: the structure of this paper, covering what to evaluate (Sec. 3), where to evaluate (Sec. 4), how to evaluate (Sec. 5), summary (Sec. 6), and grand challenges (Sec. 7).] Fig. 1. Structure of this paper. importance of ensuring their safety and reliability, particularly in safety-sensitive sectors such as financial institutions and healthcare facilities. Finally, as LLMs are becoming larger with more emergent abilities, existing evaluation protocols may not be enough to evaluate their capabilities and potential risks. Therefore, we aim to raise awareness in the community of the importance of LLMs evaluation by reviewing the current evaluation protocols and, most importantly, to shed light on future research about designing new LLMs evaluation protocols.
2307.03109#16
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
16
Open models. We experiment with MPT-30B-Instruct, which has a maximum context length of 8192 tokens. The model was initially pre-trained on 1 trillion tokens using 2048-token sequences, followed by an additional sequence length adaptation pre-training phase on 50 billion tokens using 8192-token sequences. MPT-30B-Instruct uses ALiBi (Press et al., 2022) to represent positional information. We also evaluate LongChat-13B (16K) (Li et al., 2023), which extends the LLaMA-13B (Touvron et al., 2023a) context window from 2048 to 16384 tokens by using condensed rotary positional embeddings before fine-tuning with 16384-token sequences. Closed models. We use the OpenAI API to experiment with GPT-3.5-Turbo and GPT-3.5-Turbo
2307.03172#16
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
17
2. Core information: about whether the question has actually been answered; 3. Coherence: generally, it is less important than the two above. Then the overall preference is finally determined. An alternative is to repeat the system requirement prompt after each turn. This is to ensure that the models remember their role (reviewer 1 or 2) throughout the discussion history. In the table and figure, we omit the repeated part. # 3 Experiments # 3.1 Datasets We select two “meta-evaluation” datasets with human annotations for pairwise comparisons, to measure the correlation between our evaluation methods and human judgments.
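Meta-evaluation against these human-annotated pairwise comparisons reduces to measuring how often the evaluator's preferred answer matches the human label; a minimal sketch follows (tie handling is ignored here, which is an assumption of ours rather than the paper's protocol).

```python
def pairwise_agreement(method_prefs, human_prefs):
    # Both lists contain 1 or 2, indicating which answer was preferred for each comparison.
    assert len(method_prefs) == len(human_prefs)
    matches = sum(int(m == h) for m, h in zip(method_prefs, human_prefs))
    return matches / len(human_prefs)


print(pairwise_agreement([1, 2, 1, 1], [1, 2, 2, 1]))  # 0.75
```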
2307.02762#17
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
17
With the introduction of ChatGPT [145] and GPT-4 [146], there have been a number of research efforts aiming at evaluating ChatGPT and other LLMs from different aspects (Figure 2), encompassing a range of factors such as natural language tasks, reasoning, robustness, trustworthiness, medical applications, and ethical considerations. Despite these efforts, a comprehensive overview capturing the entire gamut of evaluations is still lacking. Furthermore, the ongoing evolution of LLMs has also presented novel aspects for evaluation, thereby challenging existing evaluation protocols and reinforcing the need for thorough, multifaceted evaluation techniques. While existing research such as Bubeck et al. [15] claimed that GPT-4 can be seen as sparks of AGI, others contest this claim due to the human-crafted nature of its evaluation approach.
2307.03109#17
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
17
Closed models. We use the OpenAI API to experiment with GPT-3.5-Turbo and GPT-3.5-Turbo [Figure 5: The effect of changing the position of relevant information (document containing the answer) on multi-document question answering performance, shown for 10, 20, and 30 total retrieved documents (~2K, ~4K, and ~6K tokens); x-axis: position of the document with the answer; y-axis: accuracy; models: claude-1.3, claude-1.3-100k, gpt-3.5-turbo-0613, gpt-3.5-turbo-16k-0613, mpt-30b-instruct, longchat-13b-16k. Lower positions are closer to the start of the input context. Performance is highest when relevant information occurs at the very start or end of the context, and rapidly degrades when models must reason over information in the middle of their input context.]
2307.03172#17
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
18
LFQA (Xu et al., 2023) contains 140 long-form questions across seven domains (e.g. economics, history, and biology) and two candidate answers (from either GPT3 or Human) for each. Similar to ELI5 (Fan et al., 2019), it contains more recent (i.e. after July 2021) questions from Reddit forums “r/explainlikeimfive” and “r/AskHistorians”. The authors collected expert-level annotations of which answer is better (overall preference). Since human preferences vary, authors report expert (dis)agreements, with a Fleiss’ κ (Fleiss, 1971; Fleiss et al., 2013) at around 0.5-0.6, and accuracy > 0.8 – which indicates moderate to substantial agreement.
2307.02762#18
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
18
This paper serves as the first comprehensive survey on the evaluation of large language models. As depicted in Figure 1, we explore existing work in three dimensions: 1) What to evaluate, 2) Where to evaluate, and 3) How to evaluate. Specifically, “what to evaluate" encapsulates existing evaluation tasks for LLMs, “where to evaluate" involves selecting appropriate datasets and benchmarks for evaluation, while “how to evaluate" is concerned with the evaluation process given appropriate tasks and datasets. These three dimensions are integral to the evaluation of LLMs. We subsequently discuss potential future challenges in the realm of LLMs evaluation. The contributions of this paper are as follows: (1) We provide a comprehensive overview of LLMs evaluations from three aspects: what to evaluate, where to evaluate, and how to evaluate. Our categorization is general and encompasses the entire life cycle of LLMs evaluation. (2) Regarding what to evaluate, we summarize existing tasks in various areas and obtain insightful conclusions on the success and failure cases of LLMs (Sec. 6), providing experience for future research. (3) As for where to evaluate, we summarize evaluation metrics, datasets, and benchmarks to provide a profound understanding of current LLMs evaluations. In terms of how to evaluate, we explore current protocols and summarize novel evaluation approaches.
2307.03109#18
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
18
(16K). GPT-3.5-Turbo has a maximum context length of 4K tokens, and GPT-3.5-Turbo (16K) is a version with an extended maximum context length of 16K tokens. We evaluate Claude-1.3 and Claude-1.3 (100K) with the Anthropic API; Claude-1.3 has a maximum context length of 8K tokens, and Claude-1.3 (100K) has an extended context length of 100K tokens.

| Model | Closed-Book | Oracle |
|---|---|---|
| LongChat-13B (16K) | 35.0% | 83.4% |
| MPT-30B-Instruct | 31.5% | 81.9% |
| GPT-3.5-Turbo | 56.1% | 88.3% |
| GPT-3.5-Turbo (16K) | 56.0% | 88.6% |
| Claude-1.3 | 48.3% | 76.1% |
| Claude-1.3 (100K) | 48.2% | 76.4% |

Table 1: Closed-book and oracle accuracy of language models on the multi-document question answering task. # 2.3 Results and Discussion
2307.03172#18
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
19
[System] You are reviewer A, discussing with reviewer B about your reviews of the following answers. [Question] {Q} [Answer1] {A1} [Answer2] {A2} [Init Review A] {Review of reviewer A} [Init Review B] {Review of reviewer B} [System] "Read the reviews and discussions above, and make a decision if to change your preference, and explain, considering supported information, core information, and coherence. In the last line, choose between answer 1 and answer 2 by outputting the number 1 or 2 respectively. Do not output anything else other than the number in this last line." [Reviewer A] {First-turn output} [Reviewer B] {Second-turn output} [Reviewer A]: Table 1: The discussion template for reviewer A at the third turn. All texts above are chat history and are used as input to reviewer A’s LLM model. Core aspects that we instruct the judging/reviewer model to focus on are in boldface.
2307.02762#19
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
19
(4) We further discuss future challenges in evaluating LLMs. We open-source and maintain the related materials of LLMs evaluation at https://github.com/MLGroupJLU/LLM-eval-survey to foster a collaborative community for better evaluations. The paper is organized as follows. In Sec. 2, we provide the basic information of LLMs and AI model evaluation. Then, Sec. 3 reviews existing work from the aspects of “what to evaluate”. After that, Sec. 4 is the “where to evaluate” part, which summarizes existing datasets and benchmarks. Sec. 5 discusses how to perform the evaluation. In Sec. 6, we summarize the key findings of this paper. We discuss grand future challenges in Sec. 7 and Sec. 8 concludes the paper. # 2 BACKGROUND 2.1 Large Language Models Language models (LMs) [36, 51, 96] are computational models that have the capability to understand and generate human language. LMs have the transformative ability to predict the likelihood of word sequences or generate new text based on a given input. N-gram models [13], the most common type of LM, estimate word probabilities based on the preceding context. However, LMs also face challenges, such as the issue of rare or unseen words, the problem of overfitting, and the difficulty in capturing complex linguistic phenomena. Researchers are continuously working on improving LM architectures and training methods to address these challenges.
2307.03109#19
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
19
Table 1: Closed-book and oracle accuracy of language models on the multi-document question answering task. # 2.3 Results and Discussion We experiment with input contexts containing 10, 20, and 30 total documents. Figure 5 presents multi-document question answering performance when varying the position of relevant information within the input context. To contextualize model performance, we also evaluate on the closed-book and oracle settings (Table 1). In the closed-book setting, models are not given any documents in their input context, and must rely on their parametric memory to generate the correct answer. On the other hand, in the oracle setting, language models are given the single document that contains the answer and must use it to answer the question. We see a distinctive U-shaped performance curve—models are often much better at using relevant information that occurs at the very beginning (primacy bias) and very end of contexts (recency bias), and suffer degraded performance when forced to use information within the middle of its input context. For example, GPT-3.5-Turbo’s multi-document QA performance can drop by more than 20%—in the worst case, performance in 20- and 30-document settings is lower than performance without any input documents (i.e., closed-book performance; 56.1%). These results indicate that current models cannot effectively reason over their entire context window when prompted for downstream tasks.
2307.03172#19
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
20
Vicuna80 (Chiang et al., 2023) is a set of 80 open-ended questions from a diverse set of categories, such as roleplay and writing. In the QLoRA work (Dettmers et al., 2023), authors annotated pairwise comparison scores (overall preference) across 7 models for each question. The scores include 0, 1, 2 which correspond to tie, model 1 wins, and model 2 wins. We select pairwise comparison annotations of 4 models’ answers (i.e. GPT4, ChatGPT-3.5, Bard, Vicuna-13b). To make our study more comprehensive, we add recent proprietary language models such as Claude2. Specifically, we also annotate pairwise comparisons between Claude’s answers and the other 4 models’. We term this more complete version of the dataset Vicuna80. More details about the annotation process are provided in Appendix E. Since answers to open-ended questions are even harder to compare, the annotators achieve fair agreement. In total, there are 1-3 expert-level annotations for questions in LFQA; and there are 3 human annotations for each question in Vicuna80. We use human majority vote as the human preference during battles. # 2https://www.anthropic.com/index/
2307.02762#20
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
20
Large Language Models (LLMs) [19, 91, 257] are advanced language models with massive parameter sizes and exceptional learning capabilities. The core module behind many LLMs such as GPT-3 [43], InstructGPT [149], and GPT-4 [146] is the self-attention module in Transformer [197] [Fig. 2: Trend of LLMs evaluation papers over time (2020 - Jun. 2023, including Jul. 2023); y-axis: number of papers (0-35); x-axis: 2020, 2021, 2022, 2023.01-2023.06+.]
2307.03109#20
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
20
Model performance is highest when relevant information occurs at the beginning or end of its input context. As illustrated in Figure 5, changing the position of relevant information in the input context leads to substantial decreases in model performance. In particular, we see a distinctive U-shaped curve. [Footnote 5: We use the 0613 OpenAI model versions. Footnote 6: We also evaluate GPT-4 (8K) on a subset of multi-document QA experiments, finding similar results and trends as other models (though GPT-4 has higher absolute performance). Evaluating GPT-4 on the full multi-document QA and key-value retrieval experiments would cost upwards of $6000. See Appendix D for GPT-4 results and discussion.] Extended-context models are not necessarily better at using input context. When the input context fits in the context window of both a model and its extended-context counterpart, we see that performance between them is nearly identical. For example, the 10- and 20-document settings both fit in the context window of GPT-3.5-Turbo and GPT-3.5-Turbo (16K), and we observe that their performance as a function of position of relevant information is nearly superimposed (solid purple and dashed brown series in Figure 5). These results
2307.03172#20
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
21
# 2 https://www.anthropic.com/index/introducing-claude

| models | GPT-4 Elo (Rank) | All Elo (Rank) | All (Weighted) Elo (Rank) | Human Raters Elo (Rank) |
|---|---|---|---|---|
| GPT-4 | 1282 (1) | 1165 (1) | 1213 (-23) (1) | 1236 (1) |
| Claude | 1150 (2) | 1104 (2) | 1125 (-2) (2) | 1127 (2) |
| Vicuna | 883 (3) | 930 (3) | 912 (-8) (3) | 920 (3) |
| GPT-3.5 | 878 (+10) (4) | 919 (4) | 894 (4) | 868 (4) |
| Bard | 804 (5) | 881 (5) | 856 (+8) (5) | 847 (5) |

| models | GPT-4 Win Rate (Rank) | All Win Rate (Rank) | All (Weighted) Win Rate (Rank) | Human Raters Win Rate (Rank) |
|---|---|---|---|---|
| GPT-4 | 0.856 (1) | 0.749 (1) | 0.802 (-0.020) (1) | 0.822 (1) |
| Claude | 0.709 (2) | 0.662 (2) | 0.685 (-0.004) (2) | 0.689 (2) |
| Vicuna | 0.348 (3) | 0.393 (+0.004) (3) | 0.376 (3) | 0.389 (3) |
| GPT-3.5 | 0.342 (+0.028) (4) | 0.375 (4) | 0.346 (4) | 0.314 (4) |
| Bard | 0.245 (5) | 0.320 (5) | 0.290 (+0.004) (5) | 0.286 (5) |
2307.02762#21
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
21
Fig. 2. Trend of LLMs evaluation papers over time (2020 - Jun. 2023, including Jul. 2023.). that serves as the fundamental building block for language modeling tasks. Transformers have revolutionized the field of NLP with their ability to handle sequential data efficiently, allowing for parallelization and capturing long-range dependencies in text. One key feature of LLMs is in-context learning [14], where the model is trained to generate text based on a given context or prompt. This enables LLMs to generate more coherent and contextually relevant responses, making them suitable for interactive and conversational applications. Reinforcement Learning from Human Feedback (RLHF) [25, 268] is another crucial aspect of LLMs. This technique involves fine-tuning the model using human-generated responses as rewards, allowing the model to learn from its mistakes and improve its performance over time.
2307.03109#21
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
21
Extract the value corresponding to the specified key in the JSON object below. JSON data: {"2a8d601d-1d69-4e64-9f90-8ad825a74195": "bb3ba2a5-7de8-434b-a86e-a88bb9fa7289", "...": "...", "f4eb1c53-af0a-4dc4-a3a5-c2d50851a178": "..."} Key: "..." Corresponding value:
2307.03172#21
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
22
Table 2: Global Rank Correlation Results. The upper table shows the correlation between LLM reviewer-based ranking and the human raters’ ranking. The bottom table shows the correlation between global win rates. Boldfaced numbers are the closest to scores from human raters. Blue numbers show the difference between the score from LLM reviewers and human raters. For more detailed pairwise win rates, please refer to the heat maps in Section D. # 3.2 Setup and Metrics Following Wang et al. (2023a) and Xu et al. (2023), we first conduct example-level pairwise comparisons. Specifically, each evaluation example (pairwise comparison) consists of a question and a pair of long-form answers. We compare the model-predicted preference score against the gold human preference, and report Accuracy and Fleiss’ κ. Following Dettmers et al. (2023), we also compare model-predicted global ranking scores against human-judged ranking scores. Specifically, we report Elo scores (Elo, 1967; Askell et al., 2021) and win rate (WR) based rankings (Table 2). For experiments on PR, we use All to denote our method where each reviewer has equal weights, and All (weighted) to denote the setting where the final-round weights are assigned to each reviewer.
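As a concrete reference for the Elo-based global ranking metric mentioned above, the sketch below applies the standard Elo update to one pairwise battle; the starting ratings and K-factor are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal Elo update: after each pairwise "battle", the winner gains rating
# and the loser drops, scaled by how surprising the outcome was.
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32) -> tuple[float, float]:
    """score_a is 1 if A wins, 0 if A loses, 0.5 for a tie."""
    e_a = expected_score(r_a, r_b)
    r_a_new = r_a + k * (score_a - e_a)
    r_b_new = r_b + k * ((1 - score_a) - (1 - e_a))
    return r_a_new, r_b_new

# Example: two models start at 1000; model A wins one battle.
print(elo_update(1000, 1000, score_a=1.0))  # -> (1016.0, 984.0)
```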
2307.02762#22
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
22
In an autoregressive language model, such as GPT-3 and PaLM [24], given a context sequence 𝑋 , the LM task aims to predict the next token 𝑦. The model is trained by maximizing the probability of the given token sequence conditioned on the context, i.e., 𝑃 (𝑦|𝑋 ) = 𝑃 (𝑦|𝑥1, 𝑥2, ..., 𝑥𝑡−1), where 𝑥1, 𝑥2, ..., 𝑥𝑡−1 are the tokens in the context sequence, and 𝑡 is the current position. By using the chain rule, the conditional probability can be decomposed into a product of probabilities at each position: 𝑃 (𝑦|𝑋 ) = ∏_{𝑡=1}^{𝑇} 𝑃 (𝑦𝑡 |𝑥1, 𝑥2, ..., 𝑥𝑡−1), where 𝑇 is the sequence length. In this way, the model predicts each token at each position in an autoregressive manner, generating a complete text sequence.
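To make the chain-rule decomposition concrete, the minimal sketch below scores a token sequence by summing per-position log-probabilities; the toy next-token distributions are hypothetical stand-ins for a trained autoregressive LM.

```python
import math

# Toy next-token distributions conditioned on the preceding context,
# standing in for a trained autoregressive LM (hypothetical numbers).
next_token_probs = {
    ():              {"the": 0.6, "a": 0.4},
    ("the",):        {"cat": 0.5, "dog": 0.5},
    ("the", "cat"):  {"sat": 0.7, "ran": 0.3},
}

def sequence_log_prob(tokens):
    """log P(sequence) = sum over t of log P(token_t | token_1 .. token_{t-1})."""
    log_p = 0.0
    for t, token in enumerate(tokens):
        context = tuple(tokens[:t])
        log_p += math.log(next_token_probs[context][token])
    return log_p

print(sequence_log_prob(["the", "cat", "sat"]))  # log(0.6) + log(0.5) + log(0.7)
```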
2307.03109#22
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.02762
23
For Vicuna-13b, we use the default version from Chiang et al. (2023). For all other API-based LLM models, we use specific versions of each: GPT-4-0613, GPT-3.5-turbo-0613, Claude-1, and PaLM-2 for GPT-4, GPT-3.5, Claude, and Bard respectively. For more details, please refer to Appendix B.

| Reviewer | Fleiss Kappa | Accuracy |
| --- | --- | --- |
| GPT-3.5 | 0.387 | 0.621 |
| Claude | 0.319 | 0.607 |
| GPT-4 | 0.406 | 0.643 |
| GPT-4 & Claude & GPT-3.5 | 0.403 | 0.666 |
| All Reviewers (Weighted) | 0.410 | 0.673 |

Table 3: Example-level correlation results; for the fourth and fifth rows, we take the peer reviewers’ majority vote weighted by win rate. such as GPT-4 and Claude.
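For the fourth and fifth rows of Table 3, the reviewers' preferences are combined by a majority vote weighted by win rate; a minimal sketch of such a vote is shown below, with hypothetical reviewer weights and per-example preferences rather than the paper's actual numbers.

```python
# Hypothetical reviewer win-rate weights and per-example preferences
# ("A" or "B") for a single pairwise comparison.
weights = {"gpt-4": 0.70, "claude": 0.65, "gpt-3.5": 0.55}
votes   = {"gpt-4": "A", "claude": "B", "gpt-3.5": "A"}

def weighted_majority(votes, weights):
    """Return the answer with the largest total reviewer weight behind it."""
    totals = {}
    for reviewer, choice in votes.items():
        totals[choice] = totals.get(choice, 0.0) + weights[reviewer]
    return max(totals, key=totals.get)

print(weighted_majority(votes, weights))  # "A" (0.70 + 0.55 > 0.65)
```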
2307.02762#23
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
23
where 𝑇 is sequence length. In this way, the model predicts each token at each position in an autoregressive manner, generating a complete text sequence. One common approach to interacting with LLMs is prompt engineering [26, 222, 263], where users design and provide specific prompt texts to guide LLMs in generating desired responses or completing specific tasks. This is widely adopted in existing evaluation efforts. People can also engage in question-and-answer interactions [83], where they pose questions to the model and receive answers, or engage in dialogue interactions, having natural language conversations with LLMs. In conclusion, LLMs, with their Transformer architecture, in-context learning, and RLHF capabilities, have revolutionized NLP and hold promise in various applications. Table 1 provides a brief comparison of traditional ML, deep learning, and LLMs.

# Table 1. Comparison of Traditional ML, Deep Learning, and LLMs

| Comparison | Traditional ML | Deep Learning | LLMs |
| --- | --- | --- | --- |
| Training Data Size | Large | Large | Very large |
| Feature Engineering | Manual | Automatic | Automatic |
| Model Complexity | Limited | Complex | Very Complex |
| Interpretability | Good | Poor | Poorer |
| Performance | Moderate | High | Highest |
| Hardware Requirements | Low | High | Very High |
2307.03109#23
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
23
Extract the value corresponding to the specified key in the JSON object below. # Desired Output 703a7ce5-f17f-4e6d-b895-5836ba5ec71c Figure 6: Example of the key-value retrieval task, with an input context and the desired model output. Given a key, the goal is to return the associated value. All keys and values are 128-bit UUIDs. The relevant key-value pair for answering the query is bolded here within the input context for clarity. indicate that extended-context models are not necessarily better than their non-extended counterparts at using their input context. # 3 How Well Can Language Models Retrieve From Input Contexts? much natural language semantics as possible (using random UUIDs instead), since language features may present potential confounders. For example, Transformer language models may have varying sensitivity to different linguistic features in their input (O’Connor and Andreas, 2021). Given that language models struggle to retrieve and use information from the middle of their input contexts in the multi-document question answering task, to what extent can they simply retrieve from input contexts? We study this question with a synthetic key-value retrieval task, which is designed to provide a minimal testbed for the basic ability to retrieve matching tokens from an input context.
2307.03172#23
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
24
such as GPT-4 and Claude. In Table 3, all reviewer combinations listed except Claude, when compared to human reviews at an example level, display a Fleiss κ of around 0.40, indicating fair to moderate agreement. There is a significant difference in accuracy between reviewers. The worst reviewer is Claude, with an accuracy of only 60.69%. The best individual reviewer is GPT-4, with an accuracy of 64.25%. The combination of reviewers (PR) increases this accuracy by a few percentage points, with our PR approach being highest at 67.31%. Inspecting Table 2, all combinations of ranking methods listed give the same ranking of models: GPT-4 > Claude > Vicuna > GPT-3.5 > Bard. Besides experiments on PR and PD respectively (Section 3.3 and Section 3.4), we also compare PR and PD in an experiment of judging answer qualities of GPT-3.5 vs. Vicuna-13b (Table 6). # 3.3 Results for Peer Rank (PR) On the Vicuna80 dataset, we compare our PR method and representative LLM-based evaluations.
2307.02762#24
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
24
# Fig. 3. The evaluation process of AI models. 2.2 AI Model Evaluation AI model evaluation is an essential step in assessing the performance of a model. There are some standard model evaluation protocols, including 𝑘-fold cross-validation, holdout validation, leave-one-out cross-validation (LOOCV), bootstrap, and reduced set [8, 95]. For instance, 𝑘-fold cross-validation divides the dataset into 𝑘 parts, with one part used as a test set and the rest as training sets, which can reduce training data loss and obtain relatively more accurate model performance evaluation [48]; Holdout validation divides the dataset into training and test sets, with a smaller calculation amount but potentially more significant bias; LOOCV is a unique 𝑘-fold cross-validation method where only one data point is used as the test set [223]; Reduced set trains the model with one dataset and tests it with the remaining data, which is computationally simple, but the applicability is limited. The appropriate evaluation method should be chosen according to the specific problem and data characteristics for more reliable performance indicators.
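As a concrete reference for the 𝑘-fold protocol described above, here is a minimal sketch that splits dataset indices into 𝑘 folds, each serving once as the test set while the remaining folds form the training set; the fold count and dataset size are illustrative.

```python
# Minimal k-fold split over dataset indices.
def k_fold_indices(n_samples: int, k: int = 5):
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    indices, start = list(range(n_samples)), 0
    for size in fold_sizes:
        test = indices[start:start + size]           # one fold held out for testing
        train = indices[:start] + indices[start + size:]  # the other k-1 folds for training
        yield train, test
        start += size

for train_idx, test_idx in k_fold_indices(10, k=5):
    print(len(train_idx), test_idx)  # 8 [0, 1], 8 [2, 3], ...
```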
2307.03109#24
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
24
To modulate the position of relevant information within the input context, we change the position of the key to retrieve within the serialized JSON object. To modulate the input context length, we change the number of input JSON key-value pairs k by adding or removing random keys, changing the number of distractor key-value pairs. # 3.2 Results and Discussion # 3.1 Experimental Setup In our synthetic key-value retrieval task, the inputs are (i) a string-serialized JSON object with k key-value pairs, where each of the keys and values are unique, randomly-generated UUIDs and (ii) a key within the aforementioned JSON object. The goal is to return the value associated with the specified key. Thus, each JSON object contains one relevant key-value pair (where the value is to be returned), and k − 1 irrelevant “distractor” key-value pairs. Figure 6 provides an example input context and its corresponding desired output. We again measure accuracy by evaluating whether the correct value appears in the predicted output. Our synthetic key-value retrieval task shares similar goals with the Little Retrieval Test of Papailiopoulos et al. (2023) and the fine-grained line retrieval task of Li et al. (2023), but we explicitly seek to distill and simplify the task by removing as
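The synthetic key-value retrieval setup described here is straightforward to reproduce; the sketch below builds one prompt with k UUID pairs and checks accuracy by testing whether the gold value appears in a model's output. The helper names and the uniform random choice of the query key are assumptions for illustration, not the paper's released code.

```python
import random
import uuid

def make_kv_retrieval_example(k: int, seed: int = 0):
    """Build a synthetic key-value retrieval prompt with k UUID pairs:
    one relevant pair plus k - 1 distractors."""
    rng = random.Random(seed)
    pairs = {str(uuid.UUID(int=rng.getrandbits(128))):
             str(uuid.UUID(int=rng.getrandbits(128))) for _ in range(k)}
    query_key = rng.choice(list(pairs))
    json_str = "{" + ", ".join(f'"{key}": "{val}"' for key, val in pairs.items()) + "}"
    prompt = (
        "Extract the value corresponding to the specified key in the JSON object below.\n\n"
        f"JSON data: {json_str}\n\nKey: \"{query_key}\"\nCorresponding value:"
    )
    return prompt, pairs[query_key]

def is_correct(prediction: str, gold_value: str) -> bool:
    """Accuracy check: the gold value must appear in the model's output."""
    return gold_value in prediction

prompt, gold = make_kv_retrieval_example(k=5)
print(is_correct(f"The value is {gold}.", gold))  # True
```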
2307.03172#24
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
25
On the Vicuna80 dataset, we compare our PR method and representative LLM-based evaluations; all ranking methods give the same ordering: GPT-4 > Claude > Vicuna > GPT-3.5 > Bard. However, in terms of the Elo ratings provided by the human reviews, we clearly observe that GPT-4 favors its own answers and is prone to self-enhancement bias. The method that produces the closest Elo ratings is our approach of the weighted combination of all reviewers (“All weighted”). Figure 3: GPT-4 Elo scores every 100 battles on the Vicuna80 dataset. Elo scores provided by the GPT-4 reviewer are consistently higher than human ratings, while our All (weighted) ratings correlate with humans well. Furthermore, the method that produces the closest win rates (less than a 1% difference for many contestants) is also All weighted. In the beginning, when the weight is the same for every reviewer (weights equal to one), the win rate given by “All reviewers” is low at about 0.749, partially because each reviewer is treated equally so that each reviewer might have a preference for its own answer. After several rounds/iterations, the final win rate becomes more fair. We display the final round weights in Figure 4.
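The reweighting behaviour described here (reviewers start with equal weights, and weights are revised over rounds so that stronger models count for more) can be sketched as below; the toy preference data and the specific update rule (setting each reviewer's weight to its weighted win rate from the previous round) are assumptions for illustration rather than the paper's exact PR algorithm.

```python
# battles[(contestant_a, contestant_b)][reviewer] = 1.0 if the reviewer
# prefers A's answer, 0.0 if it prefers B's (hypothetical toy data).
models = ["gpt-4", "claude", "vicuna"]
battles = {
    ("gpt-4", "claude"):  {"gpt-4": 1.0, "claude": 0.0, "vicuna": 1.0},
    ("gpt-4", "vicuna"):  {"gpt-4": 1.0, "claude": 1.0, "vicuna": 0.0},
    ("claude", "vicuna"): {"gpt-4": 1.0, "claude": 1.0, "vicuna": 0.0},
}

def weighted_win_rates(weights):
    """Win rate of each model when reviewer votes are weighted."""
    wins = {m: 0.0 for m in models}
    games = {m: 0.0 for m in models}
    for (a, b), prefs in battles.items():
        total_w = sum(weights[r] for r in prefs)
        score_a = sum(weights[r] * p for r, p in prefs.items()) / total_w
        wins[a] += score_a
        wins[b] += 1.0 - score_a
        games[a] += 1.0
        games[b] += 1.0
    return {m: wins[m] / games[m] for m in models}

weights = {m: 1.0 for m in models}   # round 0: all reviewers weighted equally
for _ in range(5):                   # iterate so stronger contestants gain reviewer weight
    weights = weighted_win_rates(weights)
print(weights)
```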
2307.02762#25
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
25
Figure 3 illustrates the evaluation process of AI models, including LLMs. Some evaluation protocols may not be feasible to evaluate deep learning models due to the extensive training size. Thus, evaluation on a static validation set has long been the standard choice for deep learning models. For instance, computer vision models leverage static test sets such as ImageNet [33] and MS COCO [120] for evaluation. LLMs also use GLUE [200] or SuperGLUE [199] as the common test sets. As LLMs are becoming more popular with even poorer interpretability, existing evaluation protocols may not be enough to evaluate the true capabilities of LLMs thoroughly. We will introduce recent evaluations of LLMs in Sec. 5. 3 WHAT TO EVALUATE On what tasks should we evaluate LLMs to show their performance? On what tasks can we claim the strengths and weaknesses of LLMs? In this section, we divide existing tasks into the following categories: natural language processing, robustness, ethics, biases and trustworthiness, social sciences, natural science and engineering, medical applications, agent applications (using LLMs as agents), and other applications.1 1Note that LLMs are evaluated in various tasks and the categorization in this paper is only one possible way for classification of these works. There are certainly other taxonomies.
2307.03109#25
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
25
We experiment with input contexts containing 75, 140, and 300 key-value pairs (500 examples each). We use the same set of models as the multi-document question answering experiments, see §2.2 for more details. Figure 7 presents key-value retrieval performance. Claude-1.3 and Claude-1.3 (100K) do nearly perfectly on all evaluated input context lengths, but other models struggle, especially when contexts have 140 or 300 key-value pairs: although the synthetic key-value retrieval task only requires identifying exact match within the input context, not all models achieve high performance. Similar to our multi-document QA results, GPT-3.5-Turbo, GPT-3.5-Turbo (16K), and MPT-30B-Instruct have the lowest performance when they must access key-value pairs in the middle of their input context. LongChat-13B (16K) exhibits a different trend in the 140 key-value setting; we qualitatively observe that when relevant information is
2307.03172#25
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
26
Lastly, in Figure 3, we draw the line chart of how the GPT-4 Elo score changes as more battles are fed to the Elo algorithm. GPT-4's score takes off as the battle number increases. We can observe that GPT-4 displays self-enhancement across the entire process, while our PR approach-based evaluation correlates with human pairwise comparisons well. It appears that a weighted peer ranking also provides a more accurate evaluation of language models at the level of the global performance of models. At the example level, a weighted peer ranking also provides higher accuracy and a minimally higher agreement with human reviews. # 3.4 Results for Peer Discussions (PD) In this section, we demonstrate how the LLM discussion process helps with the evaluations. Explicit and Generic Prompt We first conduct preliminary experiments to find a relatively good prompt for facilitating LLM peer discussions. In Table 4, we list the accuracy of GPT-4 and Claude's initial pairwise comparison preference. They have Figure 4: Peer rank final round weights of each reviewer (gpt-4, claude, vicuna, gpt-3.5, bard: 37.7%, 48.8%, 8.18%, 5.31%, 0%).
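A minimal sketch of the peer discussion loop (two reviewers alternate turns until their stated preferences agree) is given below; the ask_llm helper, the prompt wording, and the stopping rule are hypothetical stand-ins rather than the paper's exact prompts or protocol.

```python
import re

# Hypothetical stand-in for a chat-completion call; a real run would wrap an
# actual LLM API here. Each turn must end with "Preference: A" or "Preference: B".
def ask_llm(model: str, transcript: str) -> str:
    return f"[{model}] After reviewing both answers, Preference: A"

def peer_discussion(question, answer_a, answer_b, reviewers=("gpt-4", "claude"), max_turns=6):
    """Alternate turns between two reviewers until their stated preferences agree."""
    transcript = (f"Question: {question}\nAnswer A: {answer_a}\nAnswer B: {answer_b}\n"
                  "Discuss which answer is better and end each turn with "
                  "'Preference: A' or 'Preference: B'.\n")
    last_pref = {}
    for turn in range(max_turns):
        reviewer = reviewers[turn % 2]
        reply = ask_llm(reviewer, transcript)
        transcript += f"{reviewer}: {reply}\n"
        match = re.search(r"Preference:\s*([AB])", reply)
        if match:
            last_pref[reviewer] = match.group(1)
        if len(last_pref) == 2 and len(set(last_pref.values())) == 1:
            return last_pref[reviewer], transcript   # mutual agreement reached
    return None, transcript                          # no agreement within max_turns

pref, _ = peer_discussion("What causes tides?", "The Moon's gravity.", "Ocean currents.")
print(pref)  # "A" with the stubbed reviewers above
```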
2307.02762#26
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
26
3.1 Natural Language Processing Tasks The initial objective behind the development of language models, particularly large language models, was to enhance performance on natural language processing tasks, encompassing both understanding and generation. Consequently, the majority of evaluation research has been primarily focused on natural language tasks. Table 2 summarizes the evaluation aspects of existing research, and we mainly highlight their conclusions in the following. 3.1.1 Natural language understanding. Natural language understanding represents a wide spectrum of tasks that aim to obtain a better understanding of the input sequence. We summarize recent efforts in LLMs evaluation from several aspects.
2307.03109#26
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
26
[Figure 7 panels: 75 Key-Value Pairs (~4K tokens), 140 Key-Value Pairs (~8K tokens), 300 Key-Value Pairs (~16K tokens); x-axis: Position of Key to Retrieve; y-axis: retrieval accuracy; models: claude-1.3, claude-1.3-100k, gpt-3.5-turbo-0613, gpt-3.5-turbo-16k-0613, mpt-30b-instruct, longchat-13b-16k.]
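As a rough illustration of the synthetic task behind this figure, the sketch below builds a key-value retrieval prompt with the gold key at a chosen position; the UUID-based keys/values and the exact prompt wording are assumptions, not the paper's exact setup.

```python
# Sketch of a synthetic key-value retrieval example, assuming UUID-style keys/values
# and a JSON-like prompt format; the exact wording used in the paper may differ.
import json
import uuid

def make_kv_prompt(num_pairs: int, relevant_position: int):
    """Build a key-value retrieval prompt with the gold key at `relevant_position` (0-indexed)."""
    pairs = [(str(uuid.uuid4()), str(uuid.uuid4())) for _ in range(num_pairs)]
    gold_key, gold_value = pairs[relevant_position]
    kv_json = json.dumps(dict(pairs), indent=1)  # dicts preserve insertion order
    prompt = (
        "Extract the value corresponding to the specified key in the JSON object below.\n\n"
        f"{kv_json}\n\n"
        f'Key: "{gold_key}"\nCorresponding value:'
    )
    return prompt, gold_value

# Example: 75 pairs with the gold key roughly in the middle of the context.
prompt, answer = make_kv_prompt(num_pairs=75, relevant_position=37)
```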
2307.03172#26
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.03109
27
Sentiment analysis is a task that analyzes and interprets the text to determine the emotional inclination. It is typically a binary (positive and negative) or triple (positive, neutral, and negative) class classification problem. Evaluating sentiment analysis tasks is a popular direction. Liang et al. [114] and Zeng et al. [243] showed that the performance of the models on this task is usually high. ChatGPT’s sentiment analysis prediction performance is superior to traditional sentiment analysis methods [129] and comes close to that of GPT-3.5 [159]. In fine-grained sentiment and emotion cause analysis, ChatGPT also exhibits exceptional performance [218]. In low-resource learning environments, LLMs exhibit significant advantages over small language models [251], but the ability of ChatGPT to understand low-resource languages is limited [6]. In conclusion, LLMs have demonstrated commendable performance in sentiment analysis tasks. Future work should focus on enhancing their capability to understand emotions in under-resourced languages.
2307.03109#27
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
27
Figure 7: The effect of changing the input context length and the position of relevant information on key-value retrieval performance. Lower positions are closer to the start of the input context. Although some models show perfect accuracy on this synthetic task (e.g., Claude-1.3 and Claude-1.3 (100K)), we see again that performance is often highest when relevant information occurs at the very start or end of the context, and rapidly degrades when models must retrieve from the middle of the input context. placed at the start of the input context, LongChat-13B (16K) tends to generate code to retrieve the key, rather than outputting the value directly. # 4 Why Are Language Models Not Robust to Changes in the Position of Relevant Information? Our multi-document question answering and key-value retrieval results show that language models struggle to robustly access and use information in long input contexts, since performance degrades significantly when changing the position of relevant information. To better understand why, we perform some preliminary investigations into the role of model architecture (decoder-only vs. encoder-decoder), query-aware contextualization, and instruction fine-tuning. # 4.1 Effect of Model Architecture
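A position sweep like the one summarized in Figure 7 could be scripted roughly as follows; this sketch reuses the hypothetical `make_kv_prompt` helper from the earlier sketch and treats `model_answer` as a placeholder for a real model call.

```python
# Sketch of the position-sweep evaluation implied by Figure 7: vary where the gold
# key sits in the context and measure retrieval accuracy.
def model_answer(prompt: str) -> str:
    # Hypothetical stand-in: replace with an actual language-model API call.
    raise NotImplementedError("replace with a real model call")

def accuracy_by_position(num_pairs: int, positions, n_samples: int = 50):
    results = {}
    for pos in positions:
        correct = 0
        for _ in range(n_samples):
            prompt, gold_value = make_kv_prompt(num_pairs, pos)  # from the earlier sketch
            if gold_value in model_answer(prompt):
                correct += 1
        results[pos] = correct / n_samples
    return results

# e.g. accuracy_by_position(75, positions=[0, 24, 49, 74])
```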
2307.03172#27
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
28
Table 4: Different prompting's effect on discussion accuracies (on the LFQA dataset). a moderate agreement with human preference, with GPT-4 leading by around 5%. For the discussion-based evaluators, we report three types. By "GPT-4 lead", we refer to the discussions where GPT-4 first expresses opinions; by "random", we refer to discussions where the leader is randomly picked. On the other side, when we use a generic prompt (such as "pick your preferred answer"), the discussion's final preference accuracy is around 0.69, higher than Claude's initial judgment accuracy but lower than GPT-4's. When we add more explicit aspects to the prompt3, the discussion accuracy improves significantly (a 4% improvement). When we add to each turn's prompt the role/identity information to remind the reviewer, the performance of GPT-4-leading discussions changes marginally, but Claude-leading discussion accuracy drops. Investigating the effect of role information in the prompt is a potential future work direction.
2307.02762#28
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
28
Text classification and sentiment analysis are related fields; text classification not only focuses on sentiment but also covers the processing of all kinds of texts and tasks. The work of Liang et al. [114] showed that GLM-130B was the best-performing model, with an overall accuracy of 85.8% on miscellaneous text classification. Yang and Menczer [233] found that ChatGPT can produce credibility ratings for a wide range of news outlets, and these ratings have a moderate correlation with those from human experts. Furthermore, ChatGPT achieves acceptable accuracy in a binary classification scenario (AUC=0.89). Peña et al. [154] discussed the problem of topic classification for public affairs documents and showed that using an LLM backbone in combination with SVM classifiers is a useful strategy to conduct multi-label topic classification in the domain of public affairs, with accuracies over 85%. Overall, LLMs perform well on text classification and can even handle text classification tasks in unconventional problem settings.
2307.03109#28
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
28
# 4.1 Effect of Model Architecture The open models we evaluated are all decoder-only models: at each timestep, they may only attend to prior tokens. To better understand the potential effects of model architecture on how language models use context, we compare decoder-only and encoder-decoder language models. relative positional embeddings, they can (in principle) extrapolate beyond these maximum context lengths; Shaham et al. (2023) find that both models can perform well with sequences of up to 8K tokens.
2307.03172#28
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
29
General Accuracy In Table 5, we report accuracies of discussions of multiple combinations of reviewers on LFQA. There are several trends: (1) when two models are of similar capabilities (e.g. GPT-4 and Claude), there are likely relatively large 3We select aspects from WebGPT annotation guidelines mentioned in the previous section. Table 5: Discussion accuracies on LFQA. Reviewer pairs (rows): GPT4 & Claude, GPT4 & GPT35, GPT35 & Claude, GPT35 & GPT35-0.8, Claude & Claude-0.8, GPT4 & GPT4-0.8. R1 column: 0.729, 0.729, 0.579, 0.579, 0.664, 0.779. R2 column: 0.671, 0.579, 0.671, 0.650, 0.707, 0.757. R1 lead / R2 lead / Random columns: 0.743 0.729 0.750 0.714 0.700 0.671 0.686 0.664 0.671 0.693 0.779 0.757 0.729 0.731 0.686 0.681 0.680 0.779. Accuracy: GPT-4 0.3500; GPT-4 & Claude 0.3675; All 0.4375; All (weighted) 0.4625.
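The peer discussion (PD) protocol referred to in these results can be sketched as a simple alternating loop; the prompt wording, stopping rule, and `ask_llm` helper below are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of a two-reviewer peer-discussion (PD) loop: reviewers alternate turns,
# see the discussion history, and stop once their stated preferences agree or a
# round limit is reached. `ask_llm` is a hypothetical stand-in for a model API call.
def ask_llm(reviewer: str, prompt: str) -> str:
    raise NotImplementedError("replace with a real API call returning a short judgment")

def peer_discussion(question, answer_a, answer_b, reviewers=("gpt-4", "claude"), max_rounds=3):
    history = []
    preferences = {r: None for r in reviewers}
    for _ in range(max_rounds):
        for reviewer in reviewers:
            prompt = (
                f"Question: {question}\nAnswer A: {answer_a}\nAnswer B: {answer_b}\n"
                f"Discussion so far: {history}\n"
                "Which answer is better overall? Reply with 'A' or 'B' and explain briefly."
            )
            reply = ask_llm(reviewer, prompt)
            preferences[reviewer] = "A" if reply.strip().upper().startswith("A") else "B"
            history.append((reviewer, reply))
        if len(set(preferences.values())) == 1:  # mutual agreement reached
            break
    return preferences, history
```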
2307.02762#29
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
29
Natural language inference (NLI) is the task of determining whether the given “hypothesis” logically follows from the “premise”. Qin et al. [159] showed that ChatGPT outperforms GPT-3.5 on NLI tasks. They also found that ChatGPT excels in handling factual input, which could be attributed to its RLHF training process favoring human feedback. However, Lee et al. [105] observed that LLMs perform poorly on NLI and further fail to represent human disagreement, which indicates that LLMs still have large room for improvement in this field. Semantic understanding refers to the meaning or understanding of language and its associated concepts. It involves the interpretation and comprehension of words, phrases, sentences, and the relationships between them. Semantic processing goes beyond the surface level and focuses on understanding the underlying meaning and intent. Tao et al. [184] comprehensively evaluated the event semantic processing abilities of LLMs, covering understanding, reasoning, and prediction about event semantics. Results indicated that LLMs possess an understanding of individual events, but their capacity to perceive the semantic similarity among events is constrained. In reasoning tasks, LLMs exhibit robust reasoning abilities in causal and intentional relations, yet their
2307.03109#29
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
29
Figure 8 compares the performance of decoder-only and encoder-decoder models. When Flan-UL2 is evaluated on sequences within its 2048-token training-time context window (Figure 8; left subplot), its performance is relatively robust to changes in the position of relevant information within the input context (1.9% absolute difference between best- and worst-case performance). When evaluated on settings with sequences longer than 2048 tokens (Figure 8; center and right), Flan-UL2 performance begins to degrade when relevant information is placed in the middle. Flan-T5-XXL shows a similar trend, where longer input contexts result in a greater performance degradation when placing relevant information in the middle of the input context. We hypothesize that encoder-decoder models may make better use of their context windows because their bidirectional encoder allows processing each document in the context of future documents, potentially improving relative importance estimation between documents.
2307.03172#29
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
30
Table 6: Comparing discussions (PD) and peer ranking (PR) on Vicuna80 (random order is applied to the GPT4 & Claude discussion). Accuracy: GPT-4 0.3500; GPT-4 & Claude 0.3675; All 0.4375; All (weighted) 0.4625. improvements upon initial reviews of both LLMs; (2) when there is a substantial gap between reviewer capabilities (e.g. GPT-4 and GPT-35), the discussion accuracy is usually below the stronger model's initial accuracy and higher than the weaker model's; (3) when models "self-discuss", we set different temperatures to create two variants of the same model, for example GPT-4 (temperature=0 and =0.8): the weaker model can usually "self-improve" substantially, as is the case for GPT-35, while GPT-4's self-discussion brings only small improvements.
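The "All (weighted)" setting aggregates reviewers' pairwise preferences with per-reviewer weights; a minimal sketch of such weighted aggregation is shown below, with weight values only loosely echoing Figure 4 and a vote format chosen for illustration.

```python
# Sketch of weighted aggregation of reviewers' pairwise preferences, in the spirit
# of the "All (weighted)" setting. Weights and vote format are illustrative assumptions.
def weighted_preference(votes: dict, weights: dict) -> str:
    """votes: reviewer -> preferred answer ("A" or "B"); returns the weighted winner."""
    scores = {"A": 0.0, "B": 0.0}
    for reviewer, choice in votes.items():
        scores[choice] += weights.get(reviewer, 0.0)
    return max(scores, key=scores.get)

# Example weights loosely echoing Figure 4's final-round reviewer weights.
weights = {"gpt-4": 0.377, "claude": 0.488, "gpt-3.5": 0.053}
print(weighted_preference({"gpt-4": "A", "claude": "B", "gpt-3.5": "A"}, weights))  # -> "B"
```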
2307.02762#30
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
30
2Several NLP areas have intersections and thus our categorization of these areas is only one possible way to categorize. J. ACM, Vol. 37, No. 4, Article 111. Publication date: August 2018. 111:7 111:8 Chang et al. Table 2. Summary of evaluation on natural language processing tasks: NLU (Natural Language Under- standing, including SA (Sentiment Analysis), TC (Text Classification), NLI (Natural Language Inference) and other NLU tasks), Reasoning, NLG (Natural Language Generation, including Summ. (Summarization), Dlg. (Dialogue), Tran (Translation), QA (Question Answering) and other NLG tasks), and Multilingual tasks (ordered by the name of the first author). NLU SA TC NLI Others RNG. NLG Summ. Dlg. Tran. QA Others Mul. ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
2307.03109#30
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
30
We experiment with Flan-T5-XXL (Raffel et al., 2020; Chung et al., 2022) and Flan-UL2 (Tay et al., 2023). Flan-T5-XXL is trained with sequences of 512 tokens (encoder and decoder). Flan-UL2 is initially trained with sequences of 512 tokens (encoder and decoder), but is then pre-trained for an extra 100K steps with 1024 tokens (encoder and decoder) before instruction fine-tuning on sequences with 2048 tokens in the encoder and 512 tokens in the decoder. However, since these models use

# 4.2 Effect of Query-Aware Contextualization

Our multi-document QA and key-value retrieval experiments place the query (i.e., the question to answer or key to retrieve) after the data to process (i.e., the documents or the key-value pairs). As a result, decoder-only models cannot attend to query tokens when contextualizing documents or key-value pairs, since the query only appears at the end
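Query-aware contextualization simply repeats the query before the data as well as after it. A minimal sketch of this prompt construction is given below; the template wording and function name are illustrative assumptions, not the paper's exact prompts.

```python
# Minimal sketch of query-aware contextualization for multi-document QA.
# The prompt wording and function name are illustrative, not the paper's exact code.

def build_qa_prompt(question: str, documents: list[str], query_aware: bool = True) -> str:
    """Build a prompt; optionally repeat the question before the documents."""
    lines = []
    if query_aware:
        # Placing the query first lets a decoder-only model condition on it
        # while reading the documents (it can only attend to prior tokens).
        lines.append(f"Question: {question}")
    lines.append("Write a high-quality answer using only the provided search results.")
    for i, doc in enumerate(documents, start=1):
        lines.append(f"Document [{i}]: {doc}")
    # The query always appears after the documents as well.
    lines.append(f"Question: {question}")
    lines.append("Answer:")
    return "\n".join(lines)


docs = [
    "Ada Lovelace wrote the first computer program.",
    "Paris is the capital of France.",
]
print(build_qa_prompt("Who wrote the first computer program?", docs))
```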
2307.03172#30
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
31
In Table 6, we report results on GPT-3.5 vs. Vicuna-13b comparisons on Vicuna80 questions; the GPT-4&Claude discussion increases the accuracy by over 1.5%. We also add the accuracy of the PR method to the table, where the review becomes substantially fairer after weighted ranking. Future investigations into how to design better self-discussion strategies would be worthwhile.

Self-enhancement bias. As Zheng et al. (2023) and we previously discovered, large language models (LLMs) exhibit self-enhancement bias when acting as judges – preferring the answers they generate, or those of models from the same series (e.g., GPT-4 and GPT-3). We conduct experiments on the subset of LFQA questions where we have human-annotated pairwise comparisons between Human and Machine-generated (GPT-3 text-davinci-002) answers. Table 7 shows the win rates of GPT-3 judged by humans and three LLMs:

Reviewer    Initial Preference    After Discussion
Human       58.67%                –
GPT-3.5     72.46%                62.22%
Claude      63.81%                60.28%
GPT-4       55.50%                58.75%

We report their initial
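Win rates of this kind can be computed directly from per-question pairwise judgments. The sketch below shows one way to do so; the data layout and tie handling are assumptions for illustration.

```python
# Sketch: compute a model's win rate from pairwise judgments.
# Each judgment records which answer the reviewer preferred; ties (if any)
# count as half a win for each side. The data layout here is an assumption.

def win_rate(judgments: list[str], model: str = "GPT-3") -> float:
    """Fraction of comparisons in which `model`'s answer is preferred."""
    score = 0.0
    for preferred in judgments:
        if preferred == model:
            score += 1.0
        elif preferred == "tie":
            score += 0.5
    return score / len(judgments)


# Toy example: the GPT-3 answer is preferred in 3 of 4 comparisons.
example = ["GPT-3", "GPT-3", "Human", "GPT-3"]
print(f"GPT-3 win rate: {win_rate(example):.2%}")  # 75.00%
```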
2307.02762#31
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
31
Abdelali et al. [1], Ahuja et al. [2], Bian et al. [9], Bang et al. [6], Bai et al. [5], Chen et al. [20], Choi et al. [23], Chia et al. [22], Frieder et al. [45], Fu et al. [47], Gekhman et al. [55], Gendron et al. [56], Honovich et al. [74], Jiang et al. [86], Lai et al. [100], Laskar et al. [102], Lopez-Lira and Tang [129], Liang et al. [114], Lee et al. [105], Lin and Chen [121], Liévin et al. [117], Liu et al. [124], Lyu et al. [130], Manakul et al. [133], Min et al. [138], Orrù et al. [147], Pan et al. [151], Peña et al. [154], Pu and Demberg [158], Pezeshkpour [156], Qin et al. [159], Riccardi and Desai [166], Saparov et al. [170], Tao et al. [184], Wang et al. [208], Wang
2307.03109#31
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
31
[Figure 8 panels: 10, 20, and 30 Total Retrieved Documents (~2K, ~4K, and ~6K tokens); x-axis: Position of Document with the Answer; y-axis: Accuracy; models: mpt-30b-instruct, longchat-13b-16k, flan-t5-xxl, flan-ul2.] Figure 8: When encoder-decoder models (Flan-UL2 and Flan-T5-XXL) are evaluated on sequences that are shorter than their encoder's training-time maximum sequence length (2048 and 512 tokens, respectively), they are relatively robust to changes in the position of relevant information within their input context (left subplot). In contrast, when these models are evaluated on sequences longer than those seen during training (center and right subplots), we observe a U-shaped performance curve—performance is higher when relevant information occurs at the beginning or end of the input context, as opposed to the middle of the input context. [Following panel title: 20 Total Retrieved Documents (~4K tokens, query-aware contextualization).]
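The evaluation protocol behind these curves varies only where the answer-bearing document sits among the distractors. A small sketch of that position sweep, with illustrative names, follows.

```python
# Sketch of the position-controlled evaluation: insert the gold (answer-bearing)
# document at a chosen index among distractor documents and sweep that index.
# Function and variable names are illustrative.

def build_context(gold_doc: str, distractors: list[str], gold_position: int) -> list[str]:
    """Return the document list with the gold document at `gold_position` (0-based)."""
    docs = list(distractors)
    docs.insert(gold_position, gold_doc)
    return docs


def sweep_positions(gold_doc: str, distractors: list[str], positions: list[int]):
    """Yield (position, documents) for each tested gold position."""
    for pos in positions:
        yield pos, build_context(gold_doc, distractors, pos)


distractors = [f"Distractor passage {i}." for i in range(19)]  # 20 documents in total
for pos, docs in sweep_positions("Gold passage containing the answer.", distractors, [0, 4, 9, 14, 19]):
    assert docs[pos].startswith("Gold")
    print(f"gold document at position {pos + 1} of {len(docs)}")
```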
2307.03172#31
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
32
Table 7: GPT-3 answer win rates judged by different reviewers on LFQA. For all LLM reviewers, we take the average accuracy of all discussions they participate in. Self-enhancement exists and is mitigated by PD. [Figure 5 legend: Prefer 1st / Prefer 2nd; y-axis: percentage of preferences.] Figure 5: The position bias of all three LLMs' initial and after-peer-discussion reviews. Human has an equivalent preference for either position (dotted line). and after-discussion preferences. Both GPT-3.5 and Claude highly prefer GPT-3's answers in their initial reviews. Specifically, GPT-3.5 significantly favors GPT-3 answers with a 13.79% higher win rate. After discussing with other LLMs, all models align better with humans. Before discussion, GPT-4's initial preference aligns well with humans and is almost the same as humans after peer discussion, which shows it does not favor GPT-3 much and is fairer.
2307.02762#32
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03172
32
[Figure panel: 20 Total Retrieved Documents (~4K tokens, query-aware contextualization); x-axis: Position of Document with the Answer; y-axis: Accuracy; models: claude-1.3, claude-1.3-100k, gpt-3.5-turbo-0613, gpt-3.5-turbo-16k-0613, mpt-30b-instruct, longchat-13b-16k.] formance on the 75, 140, and 300 key-value pair settings. For example, GPT-3.5-Turbo (16K) with query-aware contextualization achieves perfect performance when evaluated with 300 key-value pairs. In contrast, without query-aware contextualization, the worst-case performance is 45.6% (Figure 7). Despite the significant impact on key-value retrieval performance, query-aware contextualization minimally affects performance trends in the multi-document question answering task (Figure 9); it slightly improves performance when the relevant information is located at the very beginning of the input context, but slightly decreases performance in other settings.
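The key-value retrieval task referenced here is synthetic: a JSON object of random UUID keys and values, with one key to look up. A rough sketch of how such an input can be generated is below; the exact prompt wording is an assumption.

```python
# Rough sketch of the synthetic key-value retrieval input: a JSON object of
# random UUID keys and values plus a query for one key. The exact prompt
# wording used in the paper is an assumption here.
import json
import random
import uuid


def make_kv_prompt(num_pairs: int, query_aware: bool = True, seed: int = 0):
    rng = random.Random(seed)
    pairs = {
        str(uuid.UUID(int=rng.getrandbits(128))): str(uuid.UUID(int=rng.getrandbits(128)))
        for _ in range(num_pairs)
    }
    query_key = rng.choice(list(pairs))
    instruction = f'What is the value associated with the key "{query_key}"?'
    parts = []
    if query_aware:
        parts.append(instruction)  # repeat the query before the data as well
    parts.append(json.dumps(pairs, indent=1))
    parts.append(instruction)
    return "\n".join(parts), pairs[query_key]


prompt, expected_value = make_kv_prompt(num_pairs=5)
print(prompt)
print("expected value:", expected_value)
```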
2307.03172#32
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
33
Position bias can be mitigated by Peer Discussions. Human-annotated pairwise comparisons are not affected by the position of answers. As indicated by recent work of Wang et al. (2023a) and Dettmers et al. (2023), LLMs are prone to position bias, i.e., they tend to show a preference for specific positions, even when prompted not to do so (Table 10 in Appendix). In Table 8, the win rate of GPT-3 is highly affected by its position when models generate initial reviews. GPT-3.5 highly prefers the answer in the first position compared to Claude and GPT-4. The GPT-3 win rate calculated by GPT-3.5 is 15.79% higher than the win rate based on human-annotated pairwise [Figure 6 heatmap values omitted.] Figure 6: Pairwise win rate heatmaps: Fraction of Model A Wins For All A vs. B Battles (A: rows, B: columns). Left: GPT-4 evaluator; Middle: our method All (weighted); Right: Chatbot Arena pairwise win rate.
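Position bias of this kind can be quantified by judging every answer pair in both orders and checking whether the verdict tracks the answer or the slot. The sketch below assumes a simple `judge` callable that returns the preferred slot (1 or 2).

```python
# Sketch: quantify position bias by judging each answer pair in both orders.
# A verdict that follows the slot rather than the answer counts as biased.
# The `judge` callable (returning the preferred slot, 1 or 2) is an assumed interface.

def position_bias_rate(pairs, judge) -> float:
    """Fraction of pairs whose verdict changes answer when the order is swapped."""
    biased = 0
    for answer_a, answer_b in pairs:
        first_pass = judge(answer_a, answer_b)   # slot preferred with A shown first
        second_pass = judge(answer_b, answer_a)  # same answers, order swapped
        # A consistent judge picks the same *answer*, i.e. opposite slots.
        if first_pass == second_pass:
            biased += 1
    return biased / len(pairs)


# Toy judge that always prefers whatever is shown first -> fully position-biased.
always_first = lambda a, b: 1
print(position_bias_rate([("answer one", "answer two"), ("x", "y")], always_first))  # 1.0
```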
2307.02762#33
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03172
33
Figure 9: Query-aware contextualization (placing the query before and after the documents) does not substantially improve robustness of language models to changing the position of relevant information in multi-document QA; performance slightly increases when relevant information occurs at the very beginning, but otherwise slightly decreases. of the prompt and decoder-only models can only attend to prior tokens at each timestep. In contrast, encoder-decoder models (which seem more robust to changes in the position of relevant information; §4.1) use a bidirectional encoder to contextualize input contexts—can we use this observation to improve decoder-only models by placing the query before and after the data, enabling query-aware contextualization of documents (or key-value pairs)?

# 4.3 Effect of Instruction Fine-Tuning
2307.03172#33
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]
2307.02762
34
Reviewer    Initial Preference (GPT-3 First / Human First)    After Discussion (GPT-3 First / Human First)
Human       57.89% / 59.46%                                   57.89% / 59.46%
GPT-3.5     73.68% / 59.46%                                   67.11% / 58.56%
Claude      63.16% / 64.41%                                   55.70% / 55.41%
GPT-4       54.51% / 56.37%                                   58.27% / 58.30%

Table 8: GPT-3 answer win rate (in the GPT-3 vs Human battles).

comparisons, when GPT-3 appears first (73.68 vs 57.89). After peer discussion, all models have closer preferences to humans. Second, all LLMs' scores for GPT-3 at both positions are closer as well. These results imply that the position bias is mitigated after peer discussions. From another perspective, Figure 5 shows the global preference for selecting answers at the first or second position across different LLM reviewers. Overall, GPT-3.5 prefers answers at the first position. The other two models favor answers in the second position, which is similar to the position bias shown in Table 8. After peer discussion, we observe the same trend of mitigated position bias. # OA (Opinion Altering) total
2307.02762#34
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large language models (LLMs) are hard to evaluate and compare automatically. Recent studies suggest and predominantly use LLMs as a reference-free metric for open-ended question answering. More specifically, they use the recognized "strongest" LLM as the evaluator, which conducts pairwise comparisons of candidate models' answers and provides a ranking score. However, this intuitive method has multiple problems, such as bringing in self-enhancement (favoring its own answers) and positional bias. We draw insights and lessons from the educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that takes into account each peer LLM's pairwise preferences of all answer pairs, and outputs a final ranking of models; and (2) peer discussion (PD), where we prompt two LLMs to discuss and try to reach a mutual agreement on preferences of two answers. We conduct experiments on two benchmark datasets. We find that our approaches achieve higher accuracy and align better with human judgments, respectively. Interestingly, PR can induce a relatively accurate self-ranking of models under the anonymous setting, where each model's name is unrevealed. Our work provides space to explore evaluating models that are hard to compare for humans.
http://arxiv.org/pdf/2307.02762
Ruosen Li, Teerth Patel, Xinya Du
cs.CL, cs.AI
null
null
cs.CL
20230706
20230706
[ { "id": "1803.05457" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2306.04181" }, { "id": "2302.04166" }, { "id": "2112.00861" }, { "id": "2305.14314" }, { "id": "2211.09110" }, { "id": "1904.09675" }, { "id": "2305.14627" }, { "id": "2305.11206" }, { "id": "2305.10142" }, { "id": "2303.17760" }, { "id": "2305.14387" }, { "id": "2303.16634" } ]
2307.03109
34
performance in other relation types is comparatively weaker. In prediction tasks, LLMs exhibit enhanced predictive capabilities for future events when given more contextual information. Riccardi and Desai [166] explored the semantic proficiency of LLMs and showed that these models perform poorly at evaluating basic phrases. Furthermore, GPT-3.5 and Bard cannot distinguish between meaningful and nonsensical phrases, consistently classifying highly nonsensical phrases as meaningful. GPT-4 shows significant improvements, but its performance is still markedly lower than that of humans. In summary, the performance of LLMs on semantic understanding tasks is poor; future work could focus on improving performance in this area. In social knowledge understanding, Choi et al. [23] evaluated how well models learn and recognize concepts of social knowledge; the results revealed that, despite having far fewer parameters, fine-tuned supervised models such as BERT achieve much better performance than zero-shot state-of-the-art LLMs such as GPT [162] and GPT-J-6B [202]. This demonstrates that supervised models significantly outperform zero-shot models, highlighting that an increase in parameters does not necessarily guarantee a higher level of social knowledge in this particular scenario.
2307.03109#34
A Survey on Evaluation of Large Language Models
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, educations, natural and social sciences, agent applications, and other areas. Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey.
http://arxiv.org/pdf/2307.03109
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
cs.CL, cs.AI
Accepted by ACM Transactions on Intelligent Systems and Technology (TIST); 45 pages; More recent works; https://llm-eval.github.io/
null
cs.CL
20230706
20231229
[ { "id": "2212.13138" }, { "id": "2305.14693" }, { "id": "2108.07258" }, { "id": "2309.10691" }, { "id": "2306.09212" }, { "id": "2308.08833" }, { "id": "2304.00228" }, { "id": "2303.02155" }, { "id": "2310.02174" }, { "id": "2305.15771" }, { "id": "2104.14337" }, { "id": "2305.10355" }, { "id": "2305.10263" }, { "id": "2306.04757" }, { "id": "2307.00184" }, { "id": "2205.01068" }, { "id": "2304.06364" }, { "id": "2305.13788" }, { "id": "2305.02182" }, { "id": "2304.01457" }, { "id": "2305.07609" }, { "id": "2305.17306" }, { "id": "2304.09542" }, { "id": "2305.14982" }, { "id": "2206.04615" }, { "id": "2306.02408" }, { "id": "2306.01337" }, { "id": "2306.01590" }, { "id": "2305.03514" }, { "id": "2304.03738" }, { "id": "2303.13835" }, { "id": "2306.02864" }, { "id": "2303.12712" }, { "id": "2306.04504" }, { "id": "2206.10498" }, { "id": "2105.09938" }, { "id": "2304.07333" }, { "id": "2307.00112" }, { "id": "2305.13711" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2306.07799" }, { "id": "2301.12307" }, { "id": "2307.01135" }, { "id": "2306.04618" }, { "id": "2305.11700" }, { "id": "2306.05179" }, { "id": "2306.07075" }, { "id": "2305.19555" }, { "id": "2301.01768" }, { "id": "2304.07619" }, { "id": "2305.15269" }, { "id": "2304.02210" }, { "id": "2009.03300" }, { "id": "2305.16151" }, { "id": "2306.13394" }, { "id": "2306.04926" }, { "id": "2305.18486" }, { "id": "2304.08244" }, { "id": "2301.13867" }, { "id": "2008.02275" }, { "id": "2301.12868" }, { "id": "2305.09645" }, { "id": "2211.09110" }, { "id": "2310.20499" }, { "id": "2303.09038" }, { "id": "2305.16837" }, { "id": "2308.02490" }, { "id": "2306.11698" }, { "id": "2302.14045" }, { "id": "2308.03656" }, { "id": "2306.11507" }, { "id": "2304.02015" }, { "id": "2306.01499" }, { "id": "1910.13461" }, { "id": "1910.14599" }, { "id": "2306.09296" }, { "id": "2210.07197" }, { "id": "2309.07915" }, { "id": "2005.04118" }, { "id": "2306.04610" }, { "id": "2305.14387" }, { "id": "2306.02549" }, { "id": "2304.04339" }, { "id": "2305.11171" }, { "id": "2211.08073" }, { "id": "2305.15074" }, { "id": "2301.11596" }, { "id": "2303.17580" }, { "id": "2309.11998" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.16636" }, { "id": "2304.01938" }, { "id": "2302.12297" }, { "id": "2308.01862" }, { "id": "2103.06268" }, { "id": "2302.13971" }, { "id": "2209.12106" }, { "id": "2304.05613" }, { "id": "2207.08143" }, { "id": "2306.08997" }, { "id": "2111.02840" }, { "id": "2305.15005" }, { "id": "2303.12528" }, { "id": "1707.06875" }, { "id": "2305.01210" }, { "id": "2201.11990" }, { "id": "2305.14938" }, { "id": "2306.06331" }, { "id": "2305.08322" }, { "id": "2306.09841" }, { "id": "2307.09042" }, { "id": "2306.04563" }, { "id": "2307.06281" }, { "id": "2306.10512" }, { "id": "2306.13651" }, { "id": "2304.08354" }, { "id": "2306.04181" }, { "id": "2309.05922" }, { "id": "2310.03214" }, { "id": "2306.05087" }, { "id": "2306.06687" }, { "id": "2303.18223" }, { "id": "1904.09675" }, { "id": "2205.00445" }, { "id": "2311.15296" }, { "id": "2306.09265" }, { "id": "2302.04023" }, { "id": "2307.16125" }, { "id": "2205.12255" }, { "id": "2305.17926" }, { "id": "2306.04528" }, { "id": "2307.16789" }, { "id": "2303.16421" }, { "id": "2304.00723" }, { "id": "2306.07622" }, { "id": "2309.07045" }, { "id": "2212.02774" }, { "id": "2109.07958" }, { "id": "2306.06264" }, { "id": "2303.12057" }, { "id": "2306.01694" }, { "id": "2204.01906" }, { "id": "2302.06476" }, { "id": "2307.02046" }, { "id": "2305.14251" }, { "id": "2306.04308" }, 
{ "id": "2204.02311" }, { "id": "1810.04805" }, { "id": "2305.12421" }, { "id": "2304.03439" }, { "id": "2306.14565" }, { "id": "2305.16934" }, { "id": "2309.09150" }, { "id": "2309.12284" }, { "id": "2206.07682" }, { "id": "2304.05335" }, { "id": "2107.03374" }, { "id": "2306.15261" }, { "id": "2305.11792" }, { "id": "2307.09705" }, { "id": "2211.01910" }, { "id": "2301.12867" }, { "id": "2303.08774" }, { "id": "2109.00859" }, { "id": "2203.13474" }, { "id": "2306.03090" }, { "id": "2012.15723" }, { "id": "2305.18365" }, { "id": "2307.04657" }, { "id": "2111.08181" }, { "id": "2104.08663" }, { "id": "2305.01181" }, { "id": "2112.00861" }, { "id": "2303.08896" }, { "id": "2305.15268" }, { "id": "2305.14975" }, { "id": "1804.07461" }, { "id": "2309.11737" }, { "id": "2304.01852" }, { "id": "2309.01219" }, { "id": "2306.05685" }, { "id": "2306.05783" }, { "id": "2201.08239" }, { "id": "2307.13692" }, { "id": "2307.02477" }, { "id": "2306.05715" }, { "id": "2302.11382" }, { "id": "2305.11262" }, { "id": "2306.01248" }, { "id": "2204.04991" }, { "id": "2306.08302" } ]
2307.03172
34
# 4.3 Effect of Instruction Fine-Tuning

The models we evaluated are all instruction fine-tuned—after their initial pre-training, they undergo supervised fine-tuning on a dataset of instructions and responses. The task specification and/or instruction is commonly placed at the beginning of the input context in supervised instruction fine-tuning data, which might lead instruction fine-tuned language models to place more weight on the start of the input context. To better understand the potential effects of instruction fine-tuning on how language models use long input contexts, we compare the multi-document question answering performance of MPT-30B-Instruct against its base model (i.e., before instruction fine-tuning) MPT-30B. We use the same experimental setup as §2. We find that query-aware contextualization dramatically improves performance on the key-value retrieval task—all models achieve near-perfect per

Figure 10 compares the multi-document QA performance of MPT-30B and MPT-30B-Instruct as a function of the position of the relevant in [Figure 10 panel: 20 Total Retrieved Documents (~4K tokens); x-axis: Position of Document with the Answer; y-axis: Accuracy; models: mpt-30b, mpt-30b-instruct.]
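The comparison described here amounts to scoring the same position-varied prompts with both the base and the instruction-tuned model and tracking accuracy per gold position. A schematic sketch follows; `generate` and `is_correct` are placeholder callables, not real library functions.

```python
# Schematic sketch of the base-vs-instruction-tuned comparison: run both models
# over the same position-varied prompts and record accuracy per gold position.
# `generate` and `is_correct` are placeholder callables, not real library calls.
from collections import defaultdict


def accuracy_by_position(model_name, examples, generate, is_correct):
    """examples: iterable of (prompt, gold_position, gold_answer) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for prompt, gold_position, gold_answer in examples:
        prediction = generate(model_name, prompt)
        correct[gold_position] += int(is_correct(prediction, gold_answer))
        total[gold_position] += 1
    return {pos: correct[pos] / total[pos] for pos in sorted(total)}


# Toy stand-ins so the sketch runs end to end.
fake_generate = lambda model, prompt: "Ada Lovelace"
fake_check = lambda pred, gold: gold.lower() in pred.lower()
examples = [("...context...", pos, "Ada Lovelace") for pos in (1, 5, 10, 15, 20)]
print(accuracy_by_position("mpt-30b", examples, fake_generate, fake_check))
print(accuracy_by_position("mpt-30b-instruct", examples, fake_generate, fake_check))
```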
2307.03172#34
Lost in the Middle: How Language Models Use Long Contexts
While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts: multi-document question answering and key-value retrieval. We find that performance can degrade significantly when changing the position of relevant information, indicating that current language models do not robustly make use of information in long input contexts. In particular, we observe that performance is often highest when relevant information occurs at the beginning or end of the input context, and significantly degrades when models must access relevant information in the middle of long contexts, even for explicitly long-context models. Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
http://arxiv.org/pdf/2307.03172
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang
cs.CL
18 pages, 16 figures. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2023
null
cs.CL
20230706
20231120
[ { "id": "2302.13971" }, { "id": "2004.05150" }, { "id": "2006.04768" }, { "id": "2201.08239" }, { "id": "2205.14135" }, { "id": "2306.13421" }, { "id": "2302.00083" }, { "id": "2211.08411" }, { "id": "2305.14196" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2301.12652" }, { "id": "2205.05131" }, { "id": "2208.03188" } ]