doi
stringlengths 10
10
| chunk-id
int64 0
936
| chunk
stringlengths 401
2.02k
| id
stringlengths 12
14
| title
stringlengths 8
162
| summary
stringlengths 228
1.92k
| source
stringlengths 31
31
| authors
stringlengths 7
6.97k
| categories
stringlengths 5
107
| comment
stringlengths 4
398
⌀ | journal_ref
stringlengths 8
194
⌀ | primary_category
stringlengths 5
17
| published
stringlengths 8
8
| updated
stringlengths 8
8
| references
list |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2307.02762 | 35 | # OA (Opinion Altering) total
# Opinion Holding (OH) total
50 40 30 20 ; it al 0 eS ) yo? oe x sosâ oo & we ss oe Nes oo oo ge
Figure 7: The discussion bias of all three models at the leading and following position.
Arena ranking4 is based on user queries and their corresponding preferences for two responses. The ï¬gure demonstrates that although both approaches favor GPT-4 and Claude answers, the win rates cal- culated by our approach All (weighted) correlate better with Arena win rate, especially on weaker models.
# 4 Further Analysis
In this section, we provide a more comprehensive analysis of our methods.
Pairwise win rate of models Previously in Ta- ble 2, we presented the global win rate correlation with human ratings on Vicuna80. In Figure 6, we present the more detailed version of pairwise win rates between every two contestants (LLMs). We compare our evaluation with GPT-4 based evalua- tion, as well as the Chatbot Arena leaderboard. The | 2307.02762#35 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 35 | 3.1.2 Reasoning. The task of reasoning poses significant challenges for an intelligent AI model. To effectively tackle reasoning tasks, the models need to not only comprehend the provided information but also utilize reasoning and inference to deduce answers when explicit responses are absent. Table 2 reveals that there is a growing interest in evaluating the reasoning ability of LLMs, as evidenced by the increasing number of articles focusing on exploring this aspect. Currently, the evaluation of reasoning tasks can be broadly categorized into mathematical reasoning, commonsense reasoning, logical reasoning, and domain-specific reasoning. | 2307.03109#35 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 35 | Figure 10: Multi-document QA performance of MPT- 30B-Instruct compared against its base model (i.e., be- fore instruction fine-tuning) MPT-30B. Both models have a U-shaped performance curve, where performance is much higher when relevant information occurs at the start or end of the input context, indicating that the instruction fine-tuning process itself is not necessarily responsible for these performance trends.
formation in the input context. Surprisingly, we see that both MPT-30B and MPT-30B-Instruct ex- hibit a U-shaped performance curve, where perfor- mance is highest when relevant information occurs at the very beginning or very end of the context. Although the absolute performance of MPT-30B- Instruct is uniformly higher than that of MPT-30B, their overall performance trends are similar. We also observe that instruction fine-tuning slightly re- duces the worst-case performance disparity from nearly 10% between the base model best- and worst-case performance to around 4%. | 2307.03172#35 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 36 | The reviewer who leads the discussion tends to hold its opinion. In a discussion between two reviewers, we deï¬ne the reviewer who leads the discussion as the leader and the other reviewer as the follower. We ï¬nd leaders are less likely to be convinced by followers when it insists on its own opinion at the ï¬rst turn. We name it âDiscussion Ordering Biasâ. We observe this bias in discussions over the LFQA questions.
We deï¬ne two phenomenons which may happen
4https://lmsys.org/blog/ 2023-05-25-leaderboard/
during the discussions: (1) Opinion altering (OA): a reviewer changing its opinion after discussing with another reviewer. For example, R2 posts its preference at turn 2, which is different from R1âs preference at turn 1, then R1 changes its preference at turn 3 that agrees with R2; (2) Opinion holding (OH): a reviewer does not change its opinion even if another reviewer disagrees. For example, R1 posts its preference at turn 1 while R2 disagrees with R1 at turn 2; then, R1 still holds its preference at turn 3. | 2307.02762#36 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 36 | ChatGPT exhibits a strong capability for arithmetic reasoning by outperforming GPT-3.5 in the majority of tasks [159]. However, its proficiency in mathematical reasoning still requires improvement [6, 45, 265]. On symbolic reasoning tasks, ChatGPT is mostly worse than GPT-3.5, which may be because ChatGPT is prone to uncertain responses, leading to poor performance [6]. Through the poor performance of LLMs on task variants of counterfactual conditions, Wu et al. [227] showed that the current LLMs have certain limitations in abstract reasoning ability. On abstract reasoning, Gendron et al. [56] found that existing LLMs have very limited ability. In logical reasoning, Liu et al. [124] indicated that ChatGPT and GPT-4 outperform traditional fine-tuning methods on most benchmarks, demonstrating their superiority in logical reasoning. However, both models face challenges when handling new and out-of-distribution data. ChatGPT does not perform as well as other LLMs, including GPT-3.5 and BARD [159, 229]. This is because ChatGPT is designed explicitly for chatting, so it does an excellent job of maintaining | 2307.03109#36 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 36 | These observations complement prior work, which found that non-instruction fine-tuned lan- guage models are biased towards recent tokens (i.e., the end of the input context; Khandelwal et al., 2018; Press et al., 2021). This recency bias has been observed in past work when evaluating mod- els on next-word prediction of contiguous text, a setting where language models minimally benefit from long-range information (Sun et al., 2021). In contrast, our results show that language models are capable of using longer-range information (i.e., the beginning of the input context) when prompted with instruction-formatted data. We hypothesize that non-instruction fine-tuned language models learn to use these long contexts from similarly- formatted data that may occur in Internet text seen during pre-training, e.g., StackOverflow questions
and answers. | 2307.03172#36 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 37 | As shown in Figure 7, all models have OA when they are in the followerâs position, while their num- ber of OA decreases signiï¬cantly after they switch to the leader position. This implies that discus- sion ordering bias exists in discussions. On the pairwise comparisons of LFQA where two review- ers initially disagree:when in the leader position, GPT-4 has zero OA, and Claude has two OAs (hap- pens during the discussions with GPT-3.5). When GPT-4 discusses with Claude, both of them hold their initial preferences when they are in the leader position.
Stronger LLMs tend to hold their opinions As from Figure 7, we add up the green mass (OH total) for each LLM reviewer to obtain their OH cases in both orderings. We see that models that are commonly recognized as being stronger (e.g. GPT-4) are more ï¬rm in their reviews and hold their opinions. For example, GPT-3.5 changes its opinion most often, and GPT-4 usually holds its opinion. More speciï¬cally, GPT-4 holds its opinion in 174 discussions, while Claude and GPT-3.5 hold only in 94 and 76 discussions, respectively. | 2307.02762#37 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 37 | GPT-3.5 and BARD [159, 229]. This is because ChatGPT is designed explicitly for chatting, so it does an excellent job of maintaining rationality. FLAN-T5, LLaMA, GPT-3.5, and PaLM perform well in general deductive reasoning tasks [170]. GPT-3.5 is not good at keeping oriented for reasoning in the inductive setting [229]. For multi-step reasoning, Fu et al. [47] showed PaLM and Claude2 are the only two model families that achieve similar performance (but still worse than the GPT model family). Moreover, LLaMA-65B is the most robust open-source LLMs to date, which performs closely to code-davinci-002. Some papers separately evaluate the performance of ChatGPT on some reasoning tasks: ChatGPT generally performs poorly on commonsense reasoning tasks, but relatively better than non-text semantic reasoning [6]. Meanwhile, ChatGPT also lacks spatial reasoning ability, but exhibits better temporal reasoning. Finally, while the performance of ChatGPT is acceptable on causal and analogical reasoning, it performs poorly on multi-hop reasoning ability, which is similar to the weakness of other | 2307.03109#37 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 37 | and answers.
To better understand the effect of additional fine- tuning and model scale, we also experimented with Llama-2 models of varying sizes (7B, 13B, and 70B) with and without additional supervised fine-tuning and reinforcement learning from hu- man feedback (Appendix E). We find that the U- shaped performance curve only appears in suffi- ciently large language models (with or without ad- ditional fine-tuning)âthe 7B Llama-2 models are solely recency biased, while the 13B and 70B mod- els exhibit a U-shaped performance curve. In addi- tion, we see that the Llama-2 supervised fine-tuning and reinforcement learning from human feedback procedure slightly mitigates the positional bias in smaller models (13B, akin to trends shown when comparing MPT-30B and MPT-30B-Instruct), but minimally affects trends on larger models (70B).
5
# Is More Context Is Always Better? A Case Study With Open-Domain QA | 2307.03172#37 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 38 | Post-agreement opinion altering is when two reviewers reach a mutual agreement ï¬rst, and one reviewer regret and suddenly changes its preference in a later turn. For example, R1 and R2 agree upon answer 1 after two turns, while R1 changes its preference to answer 2 in the third turn. As shown in Table 9, GPT-4 makes this type of change the least and GPT-3.5 the most. This potentially shows that models with a lower capability are not ï¬rm about their opinions.
# 5 Related Work
Automatic Evaluations NLG evaluation meth- ods are mainly of a similarity-based or reference- free type. For similarity-based metrics, the gen- erated texts are compared to reference text. They
Reviewers Opinion Altering Leading Following GPT-3.5 Claude GPT-4 37 11 1 31 10 5
Table 9: Post-agreement opinion altering (OA). | 2307.02762#38 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 38 | the performance of ChatGPT is acceptable on causal and analogical reasoning, it performs poorly on multi-hop reasoning ability, which is similar to the weakness of other LLMs on complex reasoning [148]. In professional domain reasoning tasks, zero-shot InstructGPT and Codex are capable of complex medical reasoning tasks, but still need to be further improved [117]. In terms of language insight issues, Orrù et al. [147] demonstrated the potential of ChatGPT for | 2307.03109#38 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 38 | 5
# Is More Context Is Always Better? A Case Study With Open-Domain QA
Our results indicate that prompting language mod- els with longer input contexts is a trade-offâ providing the language model with more informa- tion may help it perform the downstream task, but it also increases the amount of content that the model must reason over, potentially decreasing accuracy. Even if a language model can take in 16K tokens, is it actually beneficial to provide 16K tokens of context? The answer to this question is ultimately downstream task-specific since it de- pends on the marginal value of the added context and the modelâs ability to effectively use long input contexts, but we perform a case study with open- domain question answering on NaturalQuestions- Open to better understand this trade-off in existing language models. | 2307.03172#38 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 39 | Reviewers Opinion Altering Leading Following GPT-3.5 Claude GPT-4 37 11 1 31 10 5
Table 9: Post-agreement opinion altering (OA).
can be divided into lexical overlap-based (Papineni et al., 2002; Lin, 2004; Banerjee and Lavie, 2005) and contextualized embedding-based (Zhang et al., In parallel, people have also 2019) evaluators. developed task-speciï¬c metrics such as consis- tency (Kry´sci´nski et al., 2020; Wang et al., 2020), faithfulness (Fabbri et al., 2022; Gao et al., 2023) and coherence (Dziri et al., 2019). This is simi- lar to our peer discussion idea on designing more speciï¬c prompts for large language model-based evaluations. Our prompting-based method is more ï¬exible and can act as a uniï¬ed evaluator (Zhong et al., 2022). | 2307.02762#39 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 39 | J. ACM, Vol. 37, No. 4, Article 111. Publication date: August 2018.
111:9
111:10
Chang et al.
solving verbal insight problems, as ChatGPTâs performance was comparable to that of human participants. It should be noted that most of the above conclusions are obtained for specific data sets. In contrast, more complex tasks have become the mainstream benchmarks for assessing the capabilities of LLMs. These include tasks such as mathematical reasoning [226, 237, 244] and structured data inference [86, 151]. Overall, LLMs show great potential in reasoning and show a continuous improvement trend, but still face many challenges and limitations, requiring more in-depth research and optimization.
3.1.3 Natural language generation. NLG evaluates the capabilities of LLMs in generating specific texts, which consists of several tasks, including summarization, dialogue generation, machine translation, question answering, and other open-ended generation tasks. | 2307.03109#39 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 39 | We use language models in a standard retriever- reader setup. A retrieval system (Contriever, fine- tuned on MS-MARCO) takes an input query from NaturalQuestions-Open and returns the k docu- ments from Wikipedia with the highest relevance score. To condition language models on these re- trieved documents, we simply include them in the prompt. We evaluate retriever recall and reader accuracy (whether any of the annotated answers appear in the predicted output) as a function of the number of retrieved documents k. We use a subset of NaturalQuestions-Open where the long answer is a paragraph (as opposed to a table or a list).
Figure 11 presents retriever recall and openee) _-e--* 80 ro Y â g7o ¢ = 60 ââ) =O â-05 ==; wi --@-â@--â~¢--« 50 go wu 10 20 30 40 50 Number of Retrieved Docs =â@â claude-1.3 =@= mpt-30b-instruct =®- claude-1.3-100k @- longchat-13b-16k â®@- gpt-3.5-turbo-0613 =®- contriever recall =@= gpt-3.5-turbo-16k-0613 | 2307.03172#39 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 40 | open- ended question answering, early work uses ROUGE (Lin, 2004) to measure the similarity be- tween human and machine-generated answers. But researchers ï¬nd that ROUGE is not a fair metric for quality measurement due to the open-ended nature of long-form answers (Reiter and Belz, 2009; Krishna et al., 2021; Xu et al., 2023). Fu et al. (2023a) propose GPTScore, which evaluates texts with generative pre-training models like GPT-3. Xu et al. (2023) also implements a similar idea for evaluating long-form answers. Given a prompt consisting of a question with two answer candidates, GPT-3 is ï¬ne-tuned to output the label answer1 and answer2. Differing from above, it produces pairwise comparisons â preference scores. | 2307.02762#40 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 40 | Summarization is a generation task that aims to learn a concise abstract for the given sentence. In this evaluation, Liang et al. [114] found that TNLG v2 (530B) [179] achieved the highest score in both scenarios, followed by OPT (175B) [247] in second place. The fine-tuned Bart [106] is still better than zero-shot ChatGPT. Specifically, ChatGPT demonstrates comparable zero-shot performance to the text-davinci-002 [6], but performs worse than GPT-3.5 [159]. These findings indicate that LLMs, particularly ChatGPT, have a general performance in summarization tasks. | 2307.03109#40 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 40 | Figure 11: Retriever recall and model performance as a function of the number of retrieved documents. Model performance saturates long before retriever recall, indi- cating that the models have difficulty making use of the extra retrieved documents.
domain QA results. We see that reader model performance saturates long before retriever per- formance saturates, indicating that readers are not effectively using the extra context. Using more than 20 retrieved documents only marginally im- proves reader performance (â¼1.5% for GPT-3.5- Turbo and â¼1% for Claude-1.3), while significantly increasing the input context length (and thus la- tency and cost). These results, coupled with the observation that models are often better at retriev- ing and using information at the start or end of the input contexts, suggest that effective rerank- ing of retrieved documents (pushing relevant infor- mation closer to the start of the input context) or ranked list truncation (retrieving fewer documents when appropriate; Arampatzis et al., 2009) may be promising directions for improving how language- model-based readers use retrieved context.
# 6 Related Work
# 6.1 Long-Context Language Models | 2307.03172#40 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 41 | LLMs as evaluators: problems and challenges Most recently, with the trend of developing open- source LLMs, evaluations for benchmarking the progress have become even more important but also more difï¬cult. Apart from testing on standard datasets such as MMLU (Hendrycks et al., 2020), they are often tested on open-ended questions, which are much more prevalent in real life (Nakano et al., 2021; Chiang et al., 2023). People mostly use the recently recognized strongest LLM such as GPT-4 (Liu et al., 2023; OpenAI, 2023) as an evaluator for either generating scores or pairwise comparisons (Dettmers et al., 2023; Wang et al.,
2023b; Zhou et al., 2023). However, such a strategy has fundamental problems mainly because of var- ious biases, such as (1) positional bias (Dettmers et al., 2023; Wang et al., 2023a), where a model favors the ï¬rst answer in pairwise comparisons; (2) verbosity and length bias (Zheng et al., 2023; Wang et al., 2023b); (3) and most importantly â the favoring of the LLMâs own answers. (Liu et al., 2023; Zheng et al., 2023) | 2307.02762#41 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 41 | Evaluating the performance of LLMs on dialogue tasks is crucial to the development of dialogue systems and improving human-computer interaction. Through such evaluation, the natural language processing ability, context understanding ability and generation ability of the model can be improved, so as to realize a more intelligent and more natural dialogue system. Both Claude and ChatGPT generally achieve better performance across all dimensions when compared to GPT-3.5 [121, 159]. When comparing the Claude and ChatGPT models, both models demonstrate competitive performance across different evaluation dimensions, with Claude slightly outperforming ChatGPT in specific configurations. Research by Bang et al. [6] underscores that fully fine-tuned models tailored for specific tasks surpass ChatGPT in both task-oriented and knowledge-based dialogue contexts. Additionally, Zheng et al. [259] have curated a comprehensive LLMs conversation dataset, LMSYS-Chat-1M, encompassing up to one million samples. This dataset serves as a valuable resource for evaluating and advancing dialogue systems. | 2307.03109#41 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 41 | # 6 Related Work
# 6.1 Long-Context Language Models
There is much prior work in designing performant language models with cheaper scaling than Trans- formers in the context length. Many lines of work pursue Transformer variants with attention modi- fications like recurrence (Dai et al., 2019), factor- izing attention into computationally less intensive approximations (Beltagy et al., 2020; Zaheer et al., 2020), or low-rank approximations (Wang et al., 2020; Peng et al., 2021). Dao et al. (2022) in- stead provide a faster exact attention by a carefullycrafted IO-aware CUDA kernel. Separately, there are attempts to do away with attention entirely to remove quadratic sequence length complexity, of- ten through convolution and/or linear RNNs, e.g., in RWKV (Peng, 2023), S4 (Gu et al., 2022), or Hyena (Poli et al., 2023). Many prior efforts evalu- ate perplexity on a diverse web corpus as a proxy for the ability to process long contexts; this work shows that precise knowledge access on long con- texts may be an added challenge.
# 6.2 How Do Language Models Use Context? | 2307.03172#41 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 42 | Efforts have been proposed to tackle them: (1) Using position switching (Wang et al., 2023a; Dettmers et al., 2023) for mitigating positional bias; (2) Zheng et al. (2023) proposes Chatbot Arena, where real users ask questions and provide pairwise judgments (win, lose, or tie) of answers generated by two different LLMs. But this is time- consuming and costly to ensure fairness â requiring expert-level annotations of pair comparisons. (3) Concurrent to our work, Bai et al. (2023) propose using each language model as an examiner, where each LLM generates questions to test other models. Different from peer evaluation, their âexamsâ are decentralized and biased with randomly generated questions. Moreover, all of the above works do not support inducing self-rankings through peer ranking.
# 6 Conclusion | 2307.02762#42 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 42 | While LLMs are not explicitly trained for translation tasks, they can still demonstrate strong performance. Wang et al. [208] demonstrated that ChatGPT and GPT-4 exhibit superior perfor- mance in comparison to commercial machine translation (MT) systems, as evaluated by humans. Additionally, they outperform most document-level NMT methods in terms of sacreBLEU scores. During contrastive testing, ChatGPT shows lower accuracy in comparison to traditional translation models. However, GPT-4 demonstrates a robust capability in explaining discourse knowledge, even though it may occasionally select incorrect translation candidates. The findings from Bang et al. [6] indicated that ChatGPT performs X â Eng translation well, but it still lacks the ability to perform Eng â X translation. Lyu et al. [130] investigated several research directions in MT utilizing LLMs. This study significantly contributes to the advancement of MT research and highlights the potential of LLMs in enhancing translation capabilities. In summary, while LLMs perform satisfactorily in several translation tasks, there is still room for improvement, e.g., enhancing the translation capability from English to non-English languages. | 2307.03109#42 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 42 | The pioneering work of Khandelwal et al. (2018) showed that small LSTM language models make increasingly coarse use of longer-term context; Sankar et al. (2019) found similar results in di- alogue models. In a similar vein, Daniluk et al. (2017) find that attentive LSTM language mod- els tend to mainly use recent history. Petroni et al. (2020) were among the first to demonstrate the potential of combining context from an in- formation retrieval system with a pretrained lan- guage models for unsupervised question answering. OâConnor and Andreas (2021) found that many information-destroying operations had marginal ef- fects on Transformer LMsâ predictions. Krishna et al. (2022) found that long-context neural gen- eration in modestly-sized Transformer language models degenerates because models fail to prop- erly condition on long context. Finally, studying long-context models, Sun et al. (2021) found that longer contexts improves prediction of only a few tokens, an empirical finding consistent with the theory of Sharan et al. (2018), who showed that sequence distributions with bounded mutual infor- mation | 2307.03172#42 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 43 | # 6 Conclusion
Our method provides promising prospects of us- ing a peer evaluation method to improve LLM- based evaluations, with the goal to mitigate poten- tial bias (e.g. self-enhancement, positional) in pre- vious prevalent methods. Our proposed peer rank process provides a more fair ranking of model capa- bilities. The peer discussion process helps models to reach mutual agreements that correlate with hu- man preference more, at the same time helping weaker models to learn from stronger modelsâ re- views. In the future, we plan to investigate how the general peer evaluation process beneï¬t the LLMs in learning to access their own answer and answer new questions (Nicol et al., 2014).
# Limitations
During evaluations, our method requires GPU re- sources or Cloud API services, for both peer dis- cussion and peer ranks.
# Acknowledgements
We thank Barry Wang for helping with Figures 1&2. We thank Jialu Li and Yunmo Chen for providing
editing suggestions on the paper draft. We thank Artidoro Pagnoni for sharing part of the human ratings.
# References | 2307.02762#43 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 43 | Question answering is a crucial technology in the field of human-computer interaction, and it has found wide application in scenarios like search engines, intelligent customer service, and QA systems. The measurement of accuracy and efficiency in QA models will have significant implications for these applications. According to Liang et al. [114], among all the evaluated models, InstructGPT davinci v2 (175B) exhibited the highest performance in terms of accuracy, robustness,
J. ACM, Vol. 37, No. 4, Article 111. Publication date: August 2018.
A Survey on Evaluation of Large Language Models | 2307.03109#43 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 43 | an empirical finding consistent with the theory of Sharan et al. (2018), who showed that sequence distributions with bounded mutual infor- mation necessarily lead to marginal average predic- tion benefits from increasingly long context. Qin et al. (2023) analyze how efficient Transformers perform on a variety of long-context downstream NLP tasks, finding that long-context transformers are recency-biased and do not effectively use long- range context. | 2307.03172#43 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 44 | editing suggestions on the paper draft. We thank Artidoro Pagnoni for sharing part of the human ratings.
# References
Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. 2021. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861.
Yushi Bai, Jiahao Ying, Yixin Cao, Xin Lv, Yuze He, Xiaozhi Wang, Jifan Yu, Kaisheng Zeng, Yijia Xiao, Haozhe Lyu, et al. 2023. Benchmarking foundation models with language-model-as-an-examiner. arXiv preprint arXiv:2306.04181.
Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evalu- ation measures for machine translation and/or sum- marization, pages 65â72. | 2307.02762#44 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 44 | J. ACM, Vol. 37, No. 4, Article 111. Publication date: August 2018.
A Survey on Evaluation of Large Language Models
and fairness across the 9 QA scenarios. Both GPT-3.5 and ChatGPT demonstrate significant advance- ments compared to GPT-3 in their ability to answer general knowledge questions. In most domains, ChatGPT surpasses GPT-3.5 by more than 2% in terms of performance [9, 159]. However, ChatGPT performs slightly weaker than GPT-3.5 on the CommonsenseQA and Social IQA benchmarks. This can be attributed to ChatGPTâs cautious nature, as it tends to decline to provide an answer when there is insufficient information available. Fine-tuned models, such as VÃcuna and ChatGPT, exhibit exceptional performance with near-perfect scores, surpassing models that lack supervised fine-tuning by a significant margin [5, 6]. Laskar et al. [102] evaluated the effectiveness of ChatGPT on a range of academic datasets, including various tasks such as answering questions, summarizing text, generating code, reasoning with commonsense, solving math problems, translating languages, detecting bias, and addressing ethical issues. Overall, LLMs showcase flawless performance on QA tasks and hold the potential for further enhancing their proficiency in social, event, and temporal commonsense knowledge in the future. | 2307.03109#44 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 44 | # 6.3 The Serial-Position Effect
The U-shaped curve we observe in this work has a connection in psychology known as the serial- position effect (Ebbinghaus, 1913; Murdock Jr, 1962), that states that in free-association recall of elements from a list, humans tend to best re- member the first and last elements of the list. The serial-position effect plays a role in understanding how humans develop short- and long-term memory. Observing a serial-position-like effect in lan- guage models is perhaps surprising, since the self- attention mechanisms underlying Transformer lan- guage models is technically equally capable of re- trieving any token from their contexts.
# 7 Conclusion | 2307.03172#44 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 45 | Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, and et.al. 2023. Vicuna: An open-source chat- bot impressing gpt-4 with 90%* chatgpt quality, https://lmsys.org/blog/2023-03-30-vicuna/.
Kwangsu Cho and Charles MacArthur. 2011. Learning by reviewing. Journal of educational psychology, 103(1):73.
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question an- swering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457.
Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2023. Qlora: Efï¬cient arXiv preprint ï¬netuning of quantized llms. arXiv:2305.14314. | 2307.02762#45 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 45 | There are also other generation tasks to explore. In the field of sentence style transfer, Pu and Demberg [158] demonstrated that ChatGPT surpasses the previous SOTA supervised model through training on the same subset for few-shot learning, as evident from the higher BLEU score. However, when it comes to controlling the formality of sentence style, ChatGPTâs performance still differs significantly from human behavior. In writing tasks, Chia et al. [22] discovered that LLMs exhibit consistent performance across various categories such as informative, professional, argumentative, and creative writing. This finding implies that LLMs possess a general proficiency in writing capabilities. In text generation quality, Chen et al. [20] revealed that ChatGPT excels in assessing text quality from multiple angles, even in the absence of reference texts, surpassing the performance of most existing automated metrics. Employing ChatGPT to generate numerical scores for text quality emerged as the most reliable and effective approach among the various testing methods studied. | 2307.03109#45 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 45 | # 7 Conclusion
We empirically study how language models use long input contexts via a series of controlled ex- periments. We show that language model perfor- mance degrades significantly when changing the position of relevant information, indicating that models struggle to robustly access and use infor- mation in long input contexts. In particular, per- formance is often lowest when models must use information in the middle of long input contexts. We conduct a preliminary investigation of the role of (i) model architecture, (ii) query-aware contextu- alization, and (iii) instruction fine-tuning to better understand how they affect how language models use context. Finally, we conclude with a practi- cal case study of open-domain question answering, finding that the performance of language model readers saturates far before retriever recall. Our results and analysis provide a better understanding of how language models use their input context and provides new evaluation protocols for future long-context models.
# Acknowledgments | 2307.03172#45 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 46 | Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. 2023. Al- pacafarm: A simulation framework for methods arXiv preprint that learn from human feedback. arXiv:2305.14387.
Nouha Dziri, Ehsan Kamalloo, Kory Mathewson, and Osmar R Zaiane. 2019. Evaluating coherence in di- alogue systems using entailment. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3806â3812.
Arpad E Elo. 1967. The proposed uscf rating system. its development, theory, and applications. Chess Life, 22(8):242â247.
Alexander Richard Fabbri, Chien-Sheng Wu, Wenhao Liu, and Caiming Xiong. 2022. Qafacteval: Im- proved qa-based factual consistency evaluation for summarization. In Proceedings of the 2022 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, pages 2587â2601. | 2307.02762#46 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 46 | 3.1.4 Multilingual tasks. While English is the predominant language, many LLMs are trained on mixed-language training data. The combination of multilingual data indeed helps LLMs gain the ability to process inputs and generate responses in different languages, making them widely adopted and accepted across the globe. However, due to the relatively recent emergence of this technology, LLMs are primarily evaluated on English data, leading to a potential oversight of evaluating their multilingual performance. To address this, several articles have provided comprehensive, open, and independent evaluations of LLMsâ performance on various NLP tasks in different non-English languages. These evaluations offer valuable insights for future research and applications. | 2307.03109#46 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 46 | # Acknowledgments
We would like to thank Luke Zettlemoyer, who served as our TACL action editor, and the the anonymous reviewers for their comments and feed- back. We also thank Claudiu Leoveanu-Condrei, Megan Leszczynski, Dmytro Okhonko, Maithra Raghu, Eric Wallace and Sang Michael Xie for feedback and discussions that helped improve this work. Further, we are grateful to Sewon Min for her help with the AmbigQA dataset. This work was supported by the Stanford Center for Research on Foundation Models (CRFM), by OpenAI via an API credits grant to the Stanford CRFM, and by Anthropic via the Claude academic access pro- gram.
# References
Avi Arampatzis, Jaap Kamps, and Stephen Robert- son. 2009. Where to stop reading a ranked list? threshold optimization using truncated score dis- tributions. In Proc. of SIGIR.
Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document trans- former. ArXiv:2004.05150. | 2307.03172#46 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 47 | Angela Fan, Yacine Jernite, Ethan Perez, David Grang- ier, Jason Weston, and Michael Auli. 2019. Eli5: In Proceedings of Long form question answering. the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 3558â3567.
Joseph L Fleiss. 1971. Measuring nominal scale agree- ment among many raters. Psychological bulletin, 76(5):378.
Joseph L Fleiss, Bruce Levin, and Myunghee Cho Paik. 2013. Statistical methods for rates and proportions. john wiley & sons.
Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023a. Gptscore: Evaluate as you desire. arXiv preprint arXiv:2302.04166.
Yao Fu, Hao Peng, Tushar Khot, and Mirella Lapata. 2023b. Improving language model negotiation with self-play and in-context learning from ai feedback. arXiv preprint arXiv:2305.10142.
Tianyu Gao, Howard Yen, Jiatong Yu, and Danqi Enabling large language models arXiv preprint Chen. 2023. to generate text with citations. arXiv:2305.14627. | 2307.02762#47 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 47 | Abdelali et al. [1] evaluated the performance of ChatGPT in standard Arabic NLP tasks and observed that ChatGPT exhibits lower performance compared to SOTA models in the zero-shot setting for most tasks. Ahuja et al. [2], Bang et al. [6], Lai et al. [100], Zhang et al. [250] utilized a greater number of languages across multiple datasets, encompassing a wider range of tasks, and conducted a more comprehensive evaluation of LLMs, including BLOOM, Vicuna, Claude, ChatGPT, and GPT-4. The results indicated that these LLMs perform poorly when it came to non-Latin languages and languages with limited resources. Despite translating the input to English and using it as the query, generative LLMs still displays subpar performance across tasks and languages compared to SOTA models [2]. Furthermore, Bang et al. [6] highlighted that ChatGPT still faces a limitation in translating sentences written in non-Latin script languages with rich linguistic resources. The aforementioned demonstrates that there are numerous challenges and ample oppor- tunities for enhancement in multilingual tasks for LLMs. Future research should prioritize achieving multilingual balance and addressing the challenges faced by non-Latin languages and low-resource languages, with the aim of better supporting users worldwide. At the same time, attention should | 2307.03109#47 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 47 | Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document trans- former. ArXiv:2004.05150.
Hyung Won Chung, Le Hou, Shayne Longpre, Bar- ret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction- finetuned language models. ArXiv:2210.11416.
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language mod- els beyond a fixed-length context. In Proc. of ACL. | 2307.03172#47 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 48 | Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Stein- hardt. 2020. Measuring massive multitask lan- In International Conference guage understanding. on Learning Representations.
Karen Sparck Jones and Julia R Galliers. 1995. Evalu- ating natural language processing systems: An anal- ysis and review.
Kalpesh Krishna, Aurko Roy, and Mohit Iyyer. 2021. Hurdles to progress in long-form question answer- In Proceedings of the 2021 Conference of ing. the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 4940â4957, Online. Association for Computational Linguistics.
Wojciech Kry´sci´nski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332â9346.
Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. 2023. Camel: Communicative agents for" mind" exploration of large scale language model society. arXiv preprint arXiv:2303.17760. | 2307.02762#48 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 48 | J. ACM, Vol. 37, No. 4, Article 111. Publication date: August 2018.
111:11
111:12
Chang et al.
be paid to the impartiality and neutrality of the language in order to mitigate any potential biases, including English bias or other biases, that could impact multilingual applications.
Factuality. Factuality in the context of LLMs refers to the extent to which the information 3.1.5 or answers provided by the model align with real-world truths and verifiable facts. Factuality in LLMs significantly impacts a variety of tasks and downstream applications, such as QA systems, information extraction, text summarization, dialogue systems, and automated fact-checking, where incorrect or inconsistent information could lead to substantial misunderstandings and misinter- pretations. Evaluating factuality is of great importance in order to trust and efficiently use these models. This includes the ability of these models to maintain consistency with known facts, avoid generating misleading or false information (known as âfactual hallucination"), and effectively learn and recall factual knowledge. A range of methodologies have been proposed to measure and improve the factuality of LLMs. | 2307.03109#48 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 48 | MichaŠDaniluk, Tim Rocktäschel, Johannes Welbl, and Sebastian Riedel. 2017. Frustratingly short attention spans in neural language modeling. In Proc. of ICLR.
Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. 2022. FlashAttention: Fast and memory-efficient exact attention with IO- awareness. ArXiv:2205.14135.
Hermann Ebbinghaus. 1913. Memory: A contribu- tion to experimental psychology. H. A. Ruger & C. E. Bussenius, Trans.
Albert Gu, Karan Goel, and Christopher Ré. 2022. Efficiently modeling long sequences with struc- tured state spaces. In Proc. of ICLR.
Maor Ivgi, Uri Shaham, and Jonathan Berant. 2023. Efficient long-text understanding with short-text models. Transactions of the Association for Computational Linguistics, 11:284â299.
Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2021. Unsupervised dense information retrieval with contrastive learning. ArXiv:2112.09118. | 2307.03172#48 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 49 | Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Ku- mar, et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110.
Chin-Yew Lin. 2004. Rouge: A package for automatic In Text summarization evaluation of summaries. branches out, pages 74â81.
Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023. Gpteval: Nlg evaluation using gpt-4 with better human align- ment. arXiv preprint arXiv:2303.16634.
Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. 2021. Webgpt: Browser-assisted question- arXiv preprint answering with human feedback. arXiv:2112.09332. | 2307.02762#49 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 49 | Wang et al. [204] assessed the internal knowledge capabilities of several large models, namely InstructGPT, ChatGPT-3.5, GPT-4, and BingChat [137], by examining their ability to answer open questions based on the Natural Questions [98] and TriviaQA [88] datasets. The evaluation process involved human assessment. The results of the study indicated that while GPT-4 and BingChat can provide correct answers for more than 80% of the questions, there is still a remaining gap of over 15% to achieve complete accuracy. In the work of Honovich et al. [74], they conducted a review of current factual consistency evaluation methods and highlighted the absence of a unified comparison framework and the limited reference value of related scores compared to binary labels. To address this, they transformed existing fact consistency tasks into binary labels, specifically considering only whether there is a factual conflict with the input text, without factoring in external knowledge. The research discovered that fact evaluation methods founded on natural language inference and question generation answering exhibit superior performance and can complement each other. Pezeshkpour [156] proposed a novel metric, based on information theory, to | 2307.03109#49 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 49 | Gautier Izacard and Edouard Grave. 2021. Lever- aging passage retrieval with generative models
for open domain question answering. In Proc. of EACL.
Nikhil Kandpal, Haikang Deng, Adam Roberts, Eric Wallace, and Colin Raffel. 2022. Large lan- guage models struggle to learn long-tail knowl- edge. ArXiv:2211.08411.
Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. 2018. Sharp nearby, fuzzy far away: How neural language models use context. In Proc. of ACL.
Kalpesh Krishna, Yapei Chang, John Wieting, and Mohit Iyyer. 2022. RankGen: Improving text generation with large ranking models. In Proc. of EMNLP.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, An- drew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural Questions: A bench- mark for question answering research. Trans- actions of the Association for Computational Linguistics, 7:452â466. | 2307.03172#49 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 50 | David Nicol, Avril Thomson, and Caroline Breslin. 2014. Rethinking feedback practices in higher ed- ucation: a peer review perspective. Assessment & evaluation in higher education, 39(1):102â122.
OpenAI. 2022. Webgpt annotation guidelines.
OpenAI. 2023. Gpt-4 technical report. ArXiv, abs/2303.08774.
Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The pagerank citation rank- ing: Bringing order to the web. Technical report, Stanford InfoLab.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311â318.
Joon Sung Park, Joseph C OâBrien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Interactive Bernstein. 2023. Generative agents: arXiv preprint simulacra of human behavior. arXiv:2304.03442. | 2307.02762#50 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 50 | question generation answering exhibit superior performance and can complement each other. Pezeshkpour [156] proposed a novel metric, based on information theory, to assess the inclusion of specific knowledge in LLMs. The metric utilized the concept of uncertainty in knowledge to measure factualness, calculated by LLMs filling in prompts and examining the probability distribution of the answer. The paper discussed two methods for injecting knowledge into LLMs: explicit inclusion of knowledge in the prompts and implicit fine-tuning of the LLMs using knowledge-related data. The study demonstrated that this approach surpasses traditional ranking methods by achieving an accuracy improvement of over 30%. Gekhman et al. [55] improved the method for evaluating fact consistency in summarization tasks. It proposed a novel approach that involved training student NLI models using summaries generated by multiple models and annotated by LLMs to ensure fact consistency. The trained student model was then used for summarization fact consistency evaluation. Manakul et al. [133] operated on two hypotheses regarding how LLMs generate factual or hallucinated responses. It proposed the use of three formulas (BERTScore [249], | 2307.03109#50 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 50 | Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proc. of ACL.
Mina Lee, Percy Liang, and Qian Yang. 2022. CoAuthor: Designing a human-AI collaborative writing dataset for exploring language model ca- pabilities. In Proc. of CHI.
Dacheng Li, Rulin Shao, Anze Xie, Ying Sheng, Lianmin Zheng, Joseph E. Gonzalez, Ion Stoica, Xuezhe Ma, , and Hao Zhang. 2023. How long can open-source LLMs truly promise on context length?
Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. When not to trust language models: In- vestigating effectiveness of parametric and non- parametric memories. In Proc. of ACL.
Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. AmbigQA: An- swering ambiguous open-domain questions. In Proc. of EMNLP.
Bennet B. Murdock Jr. 1962. The serial position effect of free recall. Journal of experimental psychology, 64(5):482. | 2307.03172#50 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 51 | Ehud Reiter and Anja Belz. 2009. An investigation into the validity of some metrics for automatically evalu- ating natural language generation systems. Compu- tational Linguistics, 35(4):529â558.
Toby Walsh. 2014. The peerrank method for peer as- In Proceedings of the Twenty-ï¬rst Eu- sessment. ropean Conference on Artiï¬cial Intelligence, pages 909â914.
Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and answering questions to evaluate the fac- In Proceedings of tual consistency of summaries. the 58th Annual Meeting of the Association for Com- putational Linguistics, pages 5008â5020.
Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023a. Large language models are not fair eval- uators. | 2307.02762#51 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 51 | operated on two hypotheses regarding how LLMs generate factual or hallucinated responses. It proposed the use of three formulas (BERTScore [249], MQAG [134] and n-gram) to evaluate factuality and employed alternative LLMs to gather token probabilities for black-box language models. The study discovered that simply computing sentence likelihood or entropy helped validate the factuality of the responses. Min et al. [138] broke down text generated by LLMs into individual âatomic" facts, which were then evaluated for their correctness. The FActScore is used to measure the performance of estimators through the calculation of F1 scores. The paper tested various estimators and revealed that current estimators still have some way to go in effectively addressing the task. Lin et al. [119] introduced the TruthfulQA dataset, designed to cause models to make mistakes. Multiple language models were tested by providing factual answers. The findings from these experiments suggest that simply scaling up model sizes may not necessarily improve their truthfulness, and recommendations are provided for the training approach. This dataset has become widely used for evaluating the factuality of LLMs [89, 146, 192, 220]. | 2307.03109#51 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 51 | Bennet B. Murdock Jr. 1962. The serial position effect of free recall. Journal of experimental psychology, 64(5):482.
Joe OâConnor and Jacob Andreas. 2021. What con- text features can Transformer language models use? In Proc. of ACL.
Dimitris Papailiopoulos, Kangwook Lee, and Jy- yong Sohn. 2023. A little retrieval test for large language models. https://github.com/ anadim/the-little-retrieval-test.
Bo Peng. 2023. RWKV-LM. https://github. com/BlinkDL/RWKV-LM.
Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah Smith, and Lingpeng Kong. In Proc. of 2021. Random feature attention. ICLR.
Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2020. How context affects language modelsâ factual predictions. In Proc. of AKBC. | 2307.03172#51 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 52 | Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A. Smith, Iz Beltagy, and Hanna Hajishirzi. 2023b. How far can camels go? exploring the state of instruction tun- ing on open resources. ArXiv, abs/2306.04751.
Fangyuan Xu, Yixiao Song, Mohit Iyyer, and Eunsol Choi. 2023. A critical evaluation of evaluations for In Proceedings of long-form question answering. ACL.
Matthew M Yalch, Erika M Vitale, and J Kevin Ford. 2019. Beneï¬ts of peer review on studentsâ writing. Psychology Learning & Teaching, 18(3):317â325.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Eval- arXiv preprint uating text generation with bert. arXiv:1904.09675. | 2307.02762#52 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 52 | J. ACM, Vol. 37, No. 4, Article 111. Publication date: August 2018.
# A Survey on Evaluation of Large Language Models
Table 3. Summary of LLMs evaluation on robustness, ethics, biases, and trustworthiness (ordered by the name of the first author).
â â â â â â â â â â â â â â â â â â â â â â â â â â â â â â
3.2 Robustness, Ethic, Bias, and Trustworthiness The evaluation encompasses crucial aspects of robustness, ethics, biases, and trustworthiness. These factors have gained increasing importance in assessing the performance of LLMs comprehensively. | 2307.03109#52 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 52 | Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y. Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, and Christopher Ré. 2023. Hyena hierarchy: Towards larger con- volutional language models. In Proc. of ICML.
Ofir Press, Noah A. Smith, and Mike Lewis. 2021. Shortformer: Better language modeling using shorter inputs. In Proc. of ACL.
Ofir Press, Noah A. Smith, and Mike Lewis. 2022. Train short, test long: Attention with linear bi- ases enables input length extrapolation. In Proc. of ICLR.
Guanghui Qin, Yukun Feng, and Benjamin Van Durme. 2023. The NLP task effectiveness of long-range transformers. In Proc. of EACL.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Ex- ploring the limits of transfer learning with a uni- fied text-to-text Transformer. Journal of Ma- chine Learning Research, 21(140):1â67. | 2307.03172#52 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 53 | Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena.
Ming Zhong, Yang Liu, Da Yin, Yuning Mao, Yizhu Jiao, Pengfei Liu, Chenguang Zhu, Heng Ji, and Towards a uniï¬ed multi- Jiawei Han. 2022. In Pro- dimensional evaluator for text generation. ceedings of the 2022 Conference on Empirical Meth- ods in Natural Language Processing, pages 2023â 2038.
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. 2023. Lima: Less is more for align- ment. arXiv preprint arXiv:2305.11206.
# A Detailed Prompt for Reviews | 2307.02762#53 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 53 | 3.2.1 Robustness. Robustness studies the stability of a system when facing unexpected inputs. Specifically, out-of-distribution (OOD) [207] and adversarial robustness are two popular research topics for robustness. Wang et al. [206] is an early work that evaluated ChatGPT and other LLMs from both the adversarial and OOD perspectives using existing benchmarks such as AdvGLUE [203], ANLI [140], and DDXPlus [41] datasets. Zhuo et al. [267] evaluated the robustness of semantic parsing. Yang et al. [234] evaluated OOD robustness by extending the GLUE [200] dataset. The results of this study emphasize the potential risks to the overall system security when manipulating visual input. For vision-language models, Zhao et al. [258] evaluated LLMs on visual input and transferred them to other visual-linguistic models, revealing the vulnerability of visual input. Li et al. [111] provided an overview of OOD evaluation for language models: adversarial robustness, domain generalization, and dataset biases. Bridging these lines of research, the authors conducted a comparative analysis, unifying the three approaches. | 2307.03109#53 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 53 | Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton- Brown, In- and Yoav Shoham. 2023. context retrieval-augmented language models. ArXiv:2302.00083.
Ohad Rubin and Jonathan Berant. 2023. Long- range language modeling with self-retrieval. ArXiv:2306.13421.
Chinnadhurai Sankar, Sandeep Subramanian, Chris Pal, Sarath Chandar, and Yoshua Bengio. 2019. Do neural dialog systems use the conversation history effectively? an empirical study. In Proc. of ACL.
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettle- moyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools.
Uri Shaham, Maor Ivgi, Avia Efrat, Jonathan Be- rant, and Omer Levy. 2023. ZeroSCROLLS: A zero-shot benchmark for long text understanding. ArXiv:2305.14196.
Vatsal Sharan, Sham Kakade, Percy Liang, and Gregory Valiant. 2018. Prediction with a short memory. In Proc. of STOC. | 2307.03172#53 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 54 | # A Detailed Prompt for Reviews
[System] You are a helpful and precise assistant for checking the quality of the answer. [Question] {Q} [Answer1] {A1} [Answer2] {A2} [System] We would like to request your feedback on the performance of two answers in response to the user question displayed above. Firstly, please compare the two answers based on if they contain unsupported tion, and a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment. Once you have carefully reviewed both submissions, in a new line, choose between the two answers by outputting the number 1 or 2 respectively. Do not output anything else other than the number in this last line.
Table 10: It shows the review template for reviewers with three slots ({Q}, {A1}, and {A2}). Similar to discussion template, we explicitly indicate aspects that reviewers need to pay attention to. As mentioned in Wang et al. (2023a), position bias still exists after em- phasizing it in the prompt.
Table 10 shows the template for reviewers to generate initial reviews.
# B LLM details
As mentioned in section 3.2, we use APIs of GPT-4, GPT-3.5, Claude, and Bard. Currently, the last two modelsâ APIs are free. | 2307.02762#54 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 54 | robustness, domain generalization, and dataset biases. Bridging these lines of research, the authors conducted a comparative analysis, unifying the three approaches. They succinctly outlined the data-generation processes and evaluation protocols for each line of study, all while emphasizing the prevailing challenges and future research prospects. Additionally, Liu et al. [123] introduced a large-scale robust visual instruction dataset to enhance the performance of large-scale multi-modal models in handling relevant images and human instructions. | 2307.03109#54 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 54 | Vatsal Sharan, Sham Kakade, Percy Liang, and Gregory Valiant. 2018. Prediction with a short memory. In Proc. of STOC.
Weijia Shi, Sewon Min, Michihiro Yasunaga, Min- joon Seo, Rich James, Mike Lewis, Luke Zettle- moyer, and Wen tau Yih. 2023. REPLUG: Retrieval-augmented black-box language mod- els. ArXiv:2301.12652.
Kurt Shuster, Jing Xu, Mojtaba Komeili, Da Ju, Eric Michael Smith, Stephen Roller, Megan Ung, Moya Chen, Kushal Arora, Joshua Lane, Morteza Behrooz, William Ngan, Spencer Poff, Naman Goyal, Arthur Szlam, Y-Lan Boureau, Melanie Kambadur, and Jason Weston. 2022. BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage. ArXiv:2208.03188.
Simeng Sun, Kalpesh Krishna, Andrew Mattarella- Micke, and Mohit Iyyer. 2021. Do long-range language models actually use long-range con- text? In Proc. of EMNLP. | 2307.03172#54 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 55 | To generate initial reviews for LFQA (140 ques- tions), GPT-4-0613 costs about $20. For the dis- cussion between GPT-4-0613 and Claude-1 on LFQA, the OpenAI API costs about $24. The price of GPT-3.5-turbo-0613 is 1/20-th and 1/30-th of GPT-4-0613 on inputs and outputs correspond- ingly.
# C Detailed Win rate & Elo Calculation
The algorithm for calculating weighted elo is de- scribed in Algorithm 1. The algorithm for calcu- lating weighted win rate is described in Algorithm 2:
Algorithm 1: Weighted Elo Ratings :B â The list of battle reviews Each review is a 5-tuple (question, contestant A, contestant B, | 2307.02762#55 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 55 | J. ACM, Vol. 37, No. 4, Article 111. Publication date: August 2018.
111:13
111:14
Chang et al.
For adversarial robustness, Zhu et al. [264] evaluated the robustness of LLMs to prompts by proposing a unified benchmark called PromptBench. They comprehensively evaluated adversarial text attacks at multiple levels (character, word, sentence, and semantics). The results showed that contemporary LLMs are vulnerable to adversarial prompts, highlighting the importance of the modelsâ robustness when facing adversarial inputs. As for new adversarial datasets, Wang et al. [201] introduced AdvGLUE++ benchmark data for assessing adversarial robustness and implemented a new evaluation protocol to scrutinize machine ethics via jailbreaking system prompts. | 2307.03109#55 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.02762 | 56 | Input reviewer, score) where a score of {-1, 0, 1} means {A wins, tie, B wins} W â The mapping of reviewers to weights Output :Elo â The Elo rating for each contestant 1 K â 32 ; 2 Deï¬ne p(x) = 1 1+10x/400 ; // scale weights so that their mean is 1. 3 W â W/mean(W ) ; 4 Elo â mapping of each contestant in B to 1000. ; 5 foreach (q, i, j, r, s) â B do 6 Ï â W [r] ; rA â Elo[i] ; rB â Elo[j] ; eA â p(rB â rA) ; eB â p(rA â rB) ; // sA has win value of 0, 0.5, or 1 for 7 8 9 10 i loss, tie, or i win sA â (1 â s)/2 ; sB â 1 â sA ; Increment Elo[i] by ÏK(sA â eA) ; Increment Elo[j] by ÏK(sB â eB) ; 11 12 13 14 15 end 16 return Elo
# Algorithm 2: Weighted Win Rates | 2307.02762#56 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 56 | 3.2.2 Ethic and bias. LLMs have been found to internalize, spread, and potentially magnify harmful information existing in the crawled training corpora, usually, toxic languages, like offensiveness, hate speech, and insults [53], as well as social biases like stereotypes towards people with a particular demographic identity (e.g., gender, race, religion, occupation, and ideology) [175]. More recently, Zhuo et al. [266] used conventional testing sets and metrics [37, 53, 153] to perform a systematic evaluation of ChatGPTâs toxicity and social bias, finding that it still exhibits noxious content to some extend. Taking a further step, Deshpande et al. [35] introduced role-playing into the model and observed an increase in generated toxicity up to 6x. Furthermore, such role-playing also caused biased toxicity towards specific entities. Different from simply measuring social biases, Ferrara [42] investigated the sources, underlying mechanisms, and corresponding ethical consequences of these biases potentially produced by ChatGPT. Beyond social biases, LLMs have also been assessed by political tendency and personality traits | 2307.03109#56 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 56 | Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng- Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel Mor- ris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vin- odkumar Prabhakaran, Mark Diaz, Ben Hutchin- son, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew | 2307.03172#56 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 57 | Input :B â The list of battle reviews Each review is a 5-tuple (question, contestant A, contestant B, reviewer, score) where a score of {-1, 0, 1} means {A wins, tie, B wins} Iters â The number of iterations to run Output :S â The win-rate for each contestant W â The resulting weights at the end 1 C â set of contestants in B ; 2 R â set of reviewers in B ; 3 W â mapping of each reviewer to 1/|R| ; 4 for 1 to Iters do 5 6 // No. of reviews for each contestant N â mapping of each c â C to 0 ; // Weighted wins for each contestant V â mapping of each c â C to 0; 7 foreach (q, i, j, r, s) â B do 8 9 // Update number of reviews Increment N [i] by 1 ; Increment N [j] by 1 ; Ï â W [r] ; /* maps (loss=-1, tie=0, win=1) to (0, 0.5, 1) Deï¬ne f (x) = (1 + x)/2 ; Increase V [i] by Ï Â· f (âs) ; | 2307.02762#57 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 57 | ethical consequences of these biases potentially produced by ChatGPT. Beyond social biases, LLMs have also been assessed by political tendency and personality traits [65, 167] based questionnaires like the Political Compass Test and MBTI test, demonstrating a propensity for progressive views and an ENFJ personality type. In addition, LLMs like GPT-3 were found to have moral biases [176] in terms of the Moral Foundation theory [58]; The study conducted by [69] reveals that existing LMs have potential in ethical judgment, but still need improvement. [256] proposes a Chinese conversational bias evaluation dataset CHBias, discovers bias risks in pre-trained models, and explores debiasing methods. Moreover, in the assessment of GPT-4 alignment, [209] discovered a systematic bias. ChatGPT is also observed to exhibit somewhat bias on cultural values [16]. Wang et al. [201] also incorporated an evaluation dataset specifically aimed at gauging stereotype bias, using both targeted and untargeted system prompts. All these ethical issues might elicit serious risks, impeding the deployment of LLMs and having a profound negative impact on | 2307.03109#57 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.02762 | 59 | 10
11
12
13
14
15
# 16 17 end 18 return S, W
D Pairwise win rate heatmap
Model B claude vicuna Fraction of Model A Wins For All A vs. B Battles (GPT-4 review
claude vicuna Model B claude vicuna os 07 06 os o4 03 o2
Model B claude vicuna os 07 06 < 3 os = o4 03 o2 Fraction of Model A Wins For All A vs. B Battles (Weighted)
Figure 8: Pairwise win rate heatmap (Left: reviewer all (weighted); Right: GPT-4).
Model B gpt-4 claude 13.5 vicuna < 3 2 vicunal Fraction of Model A Wins For All A vs. B Battles (Chatbot Arena)
Model B claude vicuna os 07 06 os o4 03 o2 Fraction of Model A Wins For All A vs. B Battles (human reviews)
Model B Model B gpt-4 claude 13.5 vicuna claude vicuna vicunal
Figure 9: Pairwise win rate heatmap (Left: arena leaderboard; Right: our human).
# E Human Annotation for Pairwise Preference | 2307.02762#59 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 59 | 3.2.3 Trustworthiness. Some work focuses on other trustworthiness problems in addition to ro- bustness and ethics.3 In their 2023 study, DecodingTrust, Wang et al. [201] offered a multifaceted exploration of trustworthiness vulnerabilities in the GPT models, especially GPT-3.5 and GPT-4. Their evaluation expanded beyond the typical trustworthiness concerns to include eight critical aspects: toxicity, stereotype bias, adversarial and out-of-distribution robustness, robustness to adver- sarial demonstrations, privacy, machine ethics, and fairness. DecodingTrustâs investigation employs an array of newly constructed scenarios, tasks, and metrics. They revealed that while GPT-4 often showcases improved trustworthiness over GPT-3.5 in standard evaluations, it is simultaneously more susceptible to attacks.
In another study by Hagendorff and Fabi [62], LLMs with enhanced cognitive abilities were evaluated. They found that these models can avoid common human intuitions and cognitive errors, demonstrating super-rational performance. By utilizing cognitive reflection tests and semantic illusion experiments, the researchers gained insights into the psychological aspects of LLMs. This method offers new perspectives for evaluating model biases and ethical issues that may not have been previously identified. Furthermore, a study by [228] brings attention to a significant concern: the consistency of judgment in LLMs diminishes notably when faced with disruptions such as | 2307.03109#59 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 59 | Hugo Touvron, Louis Martin, Kevin Stone, Pe- ter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernan- des, Jeremy Fu, Wenyin Fu, Brian Fuller, Cyn- thia Gao, Vedanuj Goswami, Naman Goyal, An- thony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Ko- renev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin | 2307.03172#59 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 60 | Figure 9: Pairwise win rate heatmap (Left: arena leaderboard; Right: our human).
# E Human Annotation for Pairwise Preference
Since completing one HIT can take a considerable amount of time (6-10 min), we added a button that allows saving their work at any stage in the middle of the HIT. This button populates a text area with a JSON representation of the current responses, which may be copied into a ï¬le.
We annotate part of the pairwise comparisons of model answers on Vicuna80. We built an interface form. The screenshot is as illustrated in Figure 11. | 2307.02762#60 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 60 | 3The term âtrustworthinessâ in this section refers to other work that contains more than robustness and ethics.
J. ACM, Vol. 37, No. 4, Article 111. Publication date: August 2018.
A Survey on Evaluation of Large Language Models
questioning, negation, or misleading cues, even if their initial judgments were accurate. The research delves into various prompting methods designed to mitigate this issue and successfully demonstrates their efficacy. | 2307.03109#60 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 60 | Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, An- gela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. Llama 2: Open foundation and fine-tuned chat models. ArXiv:2307.09288.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. At- tention is all you need. In Proc. of NeurIPS.
Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Lin- former: Self-attention with linear complexity. ArXiv:2006.04768. | 2307.03172#60 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 61 | We would like to request your feedback on the performance of several Al assistants in response to the user question displayed below. Please consider the helpfulness, relevance, accuracy, level of details of their responses. Then look at the pairwise comparisons and choose the best response, avoiding any potential bias, ensuring that the order in which the responses were presented does not affect your judgment. For your reference, we also provide the review of GPT4 on the responses. Since GPT4 has an order bias, we have provided the reviews of GPT4 on both (ResponseA, ResponseB) and (ResponseB, ResponseA) as (Assistant 1, Assistant 2). Elapsed Time: 0:0:29 Question: Use an appropriate format to structure a formal letter of recommendation for a student applying to a prestigious graduate program in computer science. Response 1 | Response 2 | Response 3 | Response 4 | Response 5 | Overall Comparisons Response 1: Here is a draft formal letter of recommendation for the student: Your Letterhead Date To the Admissions Committee: | am writing to recommend [Student Name] for admission to the Master's program in Computer Science at [College or University Name]. | have known [himvher] for [length of time] in my capacity as [relationship to student] | 2307.02762#61 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 61 | questioning, negation, or misleading cues, even if their initial judgments were accurate. The research delves into various prompting methods designed to mitigate this issue and successfully demonstrates their efficacy.
LLMs are capable of generating coherent and seemingly factual text. However, the information generated can include factual inaccuracies or statements ungrounded in reality, a phenomenon known as hallucination [163, 253]. Evaluating these issues helps improve the training methods of LLMs to reduce the occurrence of hallucinations. For the evaluation of illusions in large-scale visual models, Liu et al. [123] introduced a comprehensive and robust large-scale visual instruction dataset: LRV-Instruction. Through the GAVIE method, they fine-tuned the evaluation visual instructions, and experimental results demonstrated that LRV-Instruction effectively alleviates illusions in LLMs. In addition, Li et al. [113] conducted an assessment of illusions in large-scale visual language models, revealing through experiments that the distribution of objects in visual instructions significantly impacts object illusions in LVLMs. To enhance the assessment of object illusions in LVLMs, they introduced a polling-based query method, known as POPE. This method provides an improved evaluation of object illusions in LVLMs. | 2307.03109#61 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 61 | Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Lin- former: Self-attention with linear complexity. ArXiv:2006.04768.
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. 2020. Big Bird: Transformers for longer sequences. In Proc. of NeurIPS.
# A Ambiguity in Multi-Document QA Distractor Documents
Following past work on NaturalQuestions-Open (Izacard et al., 2021; Izacard and Grave, 2021, inter alia), we use a Wikipedia dump from late 2018 as our retrieval corpus. However, this standard Wikipedia dump has a small amount of temporal mismatch with the NaturalQuestions annotations. For example, consider the question âwhat nfl team does robert griffin iii play forâ. The Natu- ralQuestions annotated answer is âcurrently a free agentâ. However, the Wikipedia retrieval corpus contains the information that he plays for the âBalti- more Ravensâ, since he was released from the team between the Wikipedia dumpâs timestamp and the NaturalQuestions annotation process. | 2307.03172#61 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 62 | to the Master's program in Computer Science at [College or University Name]. | have known [himvher] for [length of time] in my capacity as [relationship to student] at [institution name]. [Student Name] has been an outstanding student in my [course name] course. From my interactions with [him/her], | have found [him/her] to be an exceptionally gifted student with a strong work ethic and intellectual curiosity. [He/She] has demonstrated a keen interest in the field of [area of interest}, and | am confident [he/she] would thrive in your program. Specifically, (Student Name] [description of student's key attributes and accomplishments]. [He/She] has shown strong leadership ability and communication skills, as evidenced by [examples and accomplishments]. [He/She] has also developed innovative solutions to complex problems, as demonstrated by [his/her] work on [project or paper name]. In summary, | highly and enthusiastically recommend [Student Name] for admission to your prestigious Master's program. [He/She] will be an asset to your program and the field. Please do not hesitate to contact me if you require any further information. Sincerely, [Your name] [Your title] | 2307.02762#62 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 62 | 3.3 Social Science Social science involves the study of human society and individual behavior, including economics, sociology, political science, law, and other disciplines. Evaluating the performance of LLMs in social science is important for academic research, policy formulation, and social problem-solving. Such evaluations can help improve the applicability and quality of models in the social sciences, increasing understanding of human societies and promoting social progress.
Wu et al. [224] evaluated the potential use of LLMs in addressing scaling and measurement issues in social science and found that LLMs can generate meaningful responses regarding political ideology and significantly improve text-as-data methods in social science.
In computational social science (CSS) tasks, Ziems et al. [269] presented a comprehensive evalu- ation of LLMs on several CSS tasks. During classification tasks, LLMs exhibit the lowest absolute performance on event argument extraction, character tropes, implicit hate, and empathy clas- sification, achieving accuracy below 40%. These tasks either involve complex structures (event arguments) or subjective expert taxonomies with semantics that differ from those learned during LLM pretraining. Conversely, LLMs achieve the best performance on misinformation, stance, and emotion classification. When it comes to generation tasks, LLMs often produce explanations that surpass the quality of gold references provided by crowd workers. In summary, while LLMs can greatly enhance the traditional CSS research pipeline, they cannot completely replace it. | 2307.03109#62 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 62 | We use the ambiguity annotations of Min et al. (2020) to create a subset unambiguous questions. Experiments on this unambiguous subset of the data show similar results and conclusions as the experiments on the full questions collection (Fig- ure 12).
20 Total Retrieved Documents. (~4K tokens, unambiguous questions)
Accuracy e - ~ - ~e---@ ist 5th 10th 15th 20th Position of Document with the Answer =â@â claude-1.3 =@- claude-1.3-100k =â@â gpt-3.5-turbo-0613 =@= gpt-3.5-turbo-16k-0613 =@®= mpt-30b-instruct longchat-13b-16k
Figure 12: Language model performance on a unam- biguous subset of questions.
# B Random Distractors in Multi-Document QA | 2307.03172#62 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.03109 | 63 | Some articles also evaluate LLMs on legal tasks. The zero-shot performance of LLMs is mediocre in legal case judgment summarization. LLMs have several problems, including incomplete sen- tences and words, meaningless sentences merge, and more serious errors such as inconsistent and hallucinated information [34]. The results showed that further improvement is necessary for LLMs to be useful for case judgment summarization by legal experts. Nay et al. [139] indicated that LLMs, particularly when combined with prompting enhancements and the correct legal texts, could perform better but not yet at expert tax lawyer levels.
Lastly, within the realm of psychology, Frank [44] adopted an interdisciplinary approach and drew insights from developmental psychology and comparative psychology to explore alternative methods for evaluating the capabilities of LLMs. By integrating different perspectives, researchers can deepen their understanding of the essence of cognition and effectively leverage the potential of advanced technologies such as large language models, while mitigating potential risks.
In conclusion, the utilization of LLMs has significantly benefited individuals in addressing social science-related tasks, leading to improved work efficiency. The outputs produced by LLMs serve as
J. ACM, Vol. 37, No. 4, Article 111. Publication date: August 2018.
111:15
111:16
Chang et al.
Table 4. Summary of evaluations on natural science and engineering tasks based on three aspects: Mathematics, General science and Engineering (ordered by the name of the first author). | 2307.03109#63 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 63 | Figure 12: Language model performance on a unam- biguous subset of questions.
# B Random Distractors in Multi-Document QA
We also run multi-document question answering experiments with random Wikipedia documents as distractors, which allows us to ablate the impact of retrieved distractors (hard negatives). Note that in this setting, the the document containing the an- swer can often be identified with simple heuristics (e.g., lexical overlap with the query). Figure 13 presents the results of this experiment. Although all models have higher absolute accuracy in this setting, they surprisingly still struggle to reason over their entire input context, indicating that their performance degradation is not solely due to an inability to identify relevant documents.
# C Randomizing Distractor Order in Multi-Document QA | 2307.03172#63 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.03109 | 64 | Chang et al.
Table 4. Summary of evaluations on natural science and engineering tasks based on three aspects: Mathematics, General science and Engineering (ordered by the name of the first author).
Reference Arora et al. [3] Bubeck et al. [15] Castro Nascimento and Pimentel [18] Collins et al. [27] Dao and Le [31] Guo et al. [61] Liu et al. [125] Pallagani et al. [150] Sridhara et al. [181] Valmeekam et al. [194] Valmeekam et al. [195] Wei et al. [221] Wu et al. [225] Yuan et al. [241] Yu et al. [237] Zhuang et al. [265] Mathematics General science Engineering â â â â â â â â â â â â â â â â â â
valuable resources for enhancing productivity. However, it is crucial to acknowledge that existing LLMs cannot completely replace human professionals in this domain.
3.4 Natural Science and Engineering Evaluating the performance of LLMs in natural science and engineering can help guide applications and development in scientific research, technology development, and engineering studies. | 2307.03109#64 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 64 | # C Randomizing Distractor Order in Multi-Document QA
Our prompt instructs the language model to use the provided search results to answer the question. There may be a prior in the pre-training or instruc- tion fine-tuning data to treat search results as sorted by decreasing relevance (i.e., the documents near the beginning of the input context are more likely to be useful than those at the end). To validate that our conclusions are not simply a byproduct of this bias, we run experiments with the modified instruction âWrite a high-quality answer for the given ques- tion using only the provided search results (some of which might be irrelevant). The search results are ordered randomly.â In addition, we randomly shuffle the k â 1 distractor documents.
20 Total Retrieved Documents. (~4K tokens, random distractors)
(~4K tokens, random distractors) Accuracy ~ oo uw oOo x o a a ist 5th 10th 15th 20th Position of Document with the Answer =â@â claude-1.3 =@= gpt-3.5-turbo-16k-0613 =@= claude-1.3-100k =@®= mpt-30b-instruct =â@â gpt-3.5-turbo-0613 @â longchat-13b-16k | 2307.03172#64 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 65 | We woul ket request our feedback onthe perfomance of scveral Al assists in reponse othe user cueston diplayed below. Pleese corsier ne heiness relevance, accuracy level of etl their responses Then lock atthe psiuisecarpaisons and choose the best respons, avoiding ny pfeil bes, ensuring tha he erdrin which theresponsas weve presered does rt aft yur judgment For you rference, we also provide the evew of CPT on he respnses, Shee GPT¢ hasan erderbias, we have provided the revews of Pon bth (ResporseA, Response) and (Respense3, ResponseA) as (Asstt 1, sistaâ 2}. lysed Tine. 0223 Question: What are the ray fectrs het nftuencecorcumer behavior? (ResprseLRespanse2 [espe 3 [Response [Resprses [owe Corsi) Overall Comparisons Response A. Response B âte esa pinay atone Tae mera pay ac tla eos sensible consumer behavior nein 1 Pes {'Besonal factors: These include icvcua chereterntes sich ae age, gence, flow These cule naval chrenterates citeaon nec | 2307.02762#65 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 65 | 3.4 Natural Science and Engineering Evaluating the performance of LLMs in natural science and engineering can help guide applications and development in scientific research, technology development, and engineering studies.
3.4.1 Mathematics. For fundamental mathematical problems, most large language models (LLMs) demonstrate proficiency in addition and subtraction, and possess some capability in multiplication. However, they face challenges when it comes to division, exponentiation, trigonometry functions, and logarithm functions. On the other hand, LLMs exhibit competence in handling decimal numbers, negative numbers, and irrational numbers [241]. In terms of performance, ChatGPT and GPT-4 outperform other models significantly, showcasing their superiority in solving mathematical tasks [221]. These two models have a distinct advantage in dealing with large numbers (greater than 1e12) and complex, lengthy mathematical queries. GPT-4 outperforms ChatGPT by achieving a significant increase in accuracy of 10 percentage points and a reduction in relative error by 50%, due to its superior division and trigonometry abilities, proper understanding of irrational numbers, and consistent step-by-step calculation of long expressions. | 2307.03109#65 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 65 | Figure 13: Language model performance on multi- document QA when using random distractors, rather than retrieved distractors.
Figure 14 presents the results of this experiment. We continue to see a U-shaped performance curve, with performance degrading when language mod- els must use information in the middle of their input contexts. Comparing the results in §2.3 with those when randomizing the distractor order and mentioning such in the prompt, we see that ran- domization slightly decreases performance when the relevant information is at the very beginning of the context, and slightly increases performance when using information in the middle and end of the context.
# D GPT-4 Performance
We evaluate GPT-4 (8K) on a subset of 500 ran- dom multi-document QA examples with 20 total documents in each input context (Figure 15). GPT- 4 achieves higher absolute performance than any other language model, but still shows a U-shaped performance curveâits performance is highest when relevant information occurs at the very start or end of the context, and performance degrades when it must use information in the middle of its input context.
# E Llama-2 Performance | 2307.03172#65 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 66 | {'Besonal factors: These include icvcua chereterntes sich ae age, gence, flow These cule naval chrenterates citeaon nec posendlgyandvalees: 2: Poy olgeal ore These nek Such as age, gee, nine, edacaton, rotor ee, an pcopd rs alc cre na posses icin ad pods nd serie. 3 See as: Fesmiggardieie 2 Porodagea prose lors These nude fe corsets nese elie cual, socal end ervormertal avs thal fee ie it ree. ines. cvsuer ewan sero an sewers 4 Economy These cite âStudes,balfs ard values. Secalfactors- ocr such a he pie ola produto sori, he valet of ate âhoes nels ie consumarafamiy,teondy, profuse orsevovs ane the consumors ucnsang power Masi tr: covial lass culure an reference Gouge. 4. These ncude the way apoduet or oer voz marke, ncn eve, Sutin asus: Treseirciae fe plyscal packaged Jsttbuon 6. Prod ed selvoe tacts. Tese | 2307.02762#66 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 66 | When confronted with complex and challenging mathematical problems, LLMs exhibit subpar performance. Specifically, GPT-3 demonstrates nearly random performance, while GPT-3.5 shows improvement, and GPT-4 performs the best [3]. Despite the advancements made in the new models, it is important to note that the peak performance remains relatively low compared to that of experts and these models lack the capability to engage in mathematical research [15]. The specific tasks of algebraic manipulation and calculation continue to pose challenges for GPTs [15, 27]. The primary reasons behind GPT-4âs low performance in these tasks are errors in algebraic manipulation and difficulties in retrieving pertinent domain-specific concepts. Wu et al. [225] evaluated the use of GPT-4 on difficult high school competition problems and GPT-4 reached 60% accuracy on half of the categories. Intermediate algebra and precalculus can only be solved with a low accuracy rate of around 20%. ChatGPT is not good at answering questions on topics including derivatives and applications, Oxyz spatial calculus, and spatial geometry [31]. Dao and Le [31], Wei et al. [221]
J. ACM, Vol. 37, No. 4, Article 111. Publication date: August 2018.
A Survey on Evaluation of Large Language Models | 2307.03109#66 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 66 | # E Llama-2 Performance
We evaluate Llama-2 (Touvron et al., 2023b) on multi-document QA with 20 total documents in each input context. The Llama tokenizer pro- duces longer sequences than the tokenizers for our previously-studied models, so we discard 20 exam20 Total Retrieved Documents (~4K tokens, randomly ordered)
Accuracy ao ~~ ~ wu oOo uw a o 55 es- @-~_.___.---* ist 5th 10th 15th 20th Position of Document with the Answer =@ claude-1.3 =@= gpt-3.5-turbo-16k-0613 =â@®- claude-1.3-100k =@®= mpt-30b-instruct =@- gpt-3.5-turbo-0613 @= longchat-13b-16k
Figure 14: Language model performance when random- izing the order of the distractors (rather than presenting them in order of decreasing relevance) and mentioning as such in the prompt.
# 20 Total Retrieved Documents | 2307.03172#66 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 67 | marke, ncn eve, Sutin asus: Treseirciae fe plyscal packaged Jsttbuon 6. Prod ed selvoe tacts. Tese Ince the rnfrmer te, end mood che consumer Guay, estes, anc berets oe ereduct or sevice aswel as Jet, se Mlerbenng i factors ese neo Beromance, rd arerty Undersanding hese factors can hel smeses ard pte fen emo, rd ison, and za uy sete arg ages wir et acs ay aol ha eoncuneâ'e ceclaon making safes to mcot the neods and poeronessg for age esionce proces, (EEE a RS] Compare Responses Lard 2: Response Lie bai °Resparee 2ie bate, The respanses are EXACTLY equal in cuaiy (Cisco Sw ReneS] Compare Responses 1 ard 3: Response Lis beter o Response Sis betetThe responses are EXACTLY equa im cual (EEELE Hacer] Compare Responses Land 4: Response Lie bat °Respance die bate, The responses are EXACTLY equal cual (GRETA) Compare Responses Land 5: Response Lis beter o Response Sis betel The | 2307.02762#67 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 67 | J. ACM, Vol. 37, No. 4, Article 111. Publication date: August 2018.
A Survey on Evaluation of Large Language Models
showed that ChatGPTâs performance worsens as task difficulty increases: it correctly answered 83% of the questions at the recognition level, 62% at the comprehension level, 27% at the application level, and only 10% at the highest cognitive complexity level. Given those problems at higher knowledge levels tend to be more complex, requiring in-depth understanding and problem-solving skills, such results are to be expected.
These results indicate that the effectiveness of LLMs is highly influenced by the complexity of problems they encounter. This finding holds significant implications for the design and development of optimized artificial intelligence systems capable of successfully handling these challenging tasks. | 2307.03109#67 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 67 | # 20 Total Retrieved Documents
(~4K tokens, 500 question sample) 90 80 > uU 570 uU uu a 60 on ~e =e nl 50 ce Ist 5th loth 15th ~â20th Position of Document with the Answer =@- claude-1.3 =@®= mpt-30b-instruct =@®- claude-1.3-100k = longchat-13b-16k =@= gpt-3.5-turbo-0613 © gpt-4-0613 =@= gpt-3.5-turbo-16k-0613
Figure 15: Although GPT-4 has higher absolute perfor- mance than other models, its performance still degrades when relevant information occurs in the middle of the input context.
ples (out of 2655) that exceed Llama-2âs maximum context length of 4096 tokens. We experiment with models of varying sizes (7B, 13B, and 70B pa- rameters), with and without additional supervised fine-tuning and reinforcement learning from hu- man feedback (â-chat-â models). The results are presented in Figure 16. | 2307.03172#67 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 68 | °Respance die bate, The responses are EXACTLY equal cual (GRETA) Compare Responses Land 5: Response Lis beter o Response Sis betel The fespanses are EXACTLY equa i cualy âlsicn ShonRssponse] Compare Responses 2 and 3: Response is beter Resporse 3 ete, The responses are EXACTLY equal in cui (ERB ea] Compare Responses ard 4: Response Zis bei © Respanse dis batt The respanses are EXACTLY equal in cua (GisLb SuTRaneS) Compare Responses 2 ard 5: Response Zis beter o Response Sis betet.The responses are EXACTLY equa im cuaiy (EB Tea RS] Compare Responses Sard 4: Response Sie bat © Resparee Ae batt, The respanses are EXACTLY equal in cua (CEE SToaT ERE] Compare Responses Sara's: Response Sis beter. 0 Resparse S's betel, The Fespanses are EXACTLY equa cUaly [EEE IES Reape] Compare Responses 4 and 5: Response 4 beter. Response i beter. The responses ae EXACTLY eal incl, senatnan | 2307.02762#68 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 68 | 3.4.2 General science. Further improvements are needed in the application of LLMs in the field of chemistry. Castro Nascimento and Pimentel [18] presented five straightforward tasks from various subareas of chemistry to assess ChatGPTâs comprehension of the subject, with accuracy ranging from 25% to 100%. Guo et al. [61] created a comprehensive benchmark that encompasses 8 practical chemistry tasks, which is designed to assess the performance of LLMs (including GPT-4, GPT-3.5, and Davinci-003) for each chemistry task. Based on the experiment results, GPT-4 demonstrates superior performance compared to the other two models. [3] showed that LLMs perform worse on physics problems than chemistry problems, probably because chemistry problems have lower inference complexity than physics problems in this setting. There are limited evaluation studies on LLMs in the field of general science, and the current findings indicate that further improvement is needed in the performance of LLMs within this domain.
3.4.3 Engineering. Within engineering, the tasks can be organized in ascending order of difficulty, including code generation, software engineering, and commonsense planning. | 2307.03109#68 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 68 | Comparing Llama-2 models of varying sizes, we find that only the larger models (13B and 70B) exhibit the U-shaped performance curve (i.e., both primacy and recency bias)âthe smallest Llama- 2 models (7B) are solely recency-biased. Given these results, we hypothesize that prior work (e.g., Khandelwal et al., 2018; Sun et al., 2021) did not previously observe any primacy bias in language models because the models they studied were too small (less than 1B parameters). | 2307.03172#68 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.03109 | 69 | In code generation tasks, the smaller LLMs trained for the tasks are competitive in performance, and CodeGen-16B [141] is comparable in performance to ChatGPT using a larger parameter setting, reaching about a 78% match [125]. Despite facing challenges in mastering and comprehending certain fundamental concepts in programming languages, ChatGPT showcases a commendable level of coding level [265]. Specifically, ChatGPT has developed superior skills in dynamic programming, greedy algorithm, and search, surpassing highly capable college students, but it struggles in data structure, tree, and graph theory. GPT-4 demonstrates an advanced ability to generate code based on given instructions, comprehend existing code, reason about code execution, simulate the impact of instructions, articulate outcomes in natural language, and execute pseudocode effectively [15]. In software engineering tasks, ChatGPT generally performs well and provides detailed responses, often surpassing both human expert output and SOTA output. However, for certain tasks such as code vulnerability detection and information retrieval-based test prioritization, the current version of ChatGPT fails to provide accurate answers, rendering it unsuitable for these specific | 2307.03109#69 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 69 | Comparing between Llama-2 models with and without additional supervised fine-tuning and re- inforcement learning from human feedback, we see that additional fine-tuning dramatically im- proves performance on the multi-document QA task. The 7B models with and without additional fine-tuning show minimal primacy bias, and are largely recency-biased. The 13B base model has a dramatic primacy and recency biasâthere is a 20-point accuracy disparity between the best- and worst-case performance. Applying additional fine- tuning to the 13B seems to slightly reduce this bias (10-point worst-case degradation), but the bias remains significant. However, the 70B models with and without additional fine-tuning have largely similar trends (showing both primacy and recency bias), and additional fine-tuning minimally changes the positional bias severity.
20 Total Retrieved Documents (~4K tokens) | 2307.03172#69 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
2307.02762 | 70 | Figure 11: The form used by human annotators for pairwise comparisons between model answers. Each pair of comparisons has buttons to choose which model is best, along with an area to provide an explanation. An associated button hides/shows the respective responses and automated comparisons.
# F Discussion Examples
In this section, there are four examples showing opinion-altering (OA), opinion-holding (OH), and post- agreement opinion-altering. In the following discussions, all texts before a colored reviewerâs name are the input for that reviewer. the text before "[System]" is the reviewerâs original output and the text after "[System]" is added after each round which reminds the nex reviewer about its role.
The following example is a discussion between GPT-3.5 and Claude-1. In this example, GPT-3.5 alters its opinion to agree with Claude-1, while Claude-1 holds its opinion.
GPT-3.5 Claude-1 Discussion (GPT-3.5 Leads)
System: You are reviewer 1, discussing with reviewer 2 about your reviews of the following answers. Background: [Question] When the joint-stock company was ï¬rst invented, was there a lot of pushback on the concept? What were some of the concerns? Also any recommended books on the invention of the concept would be much appreciated! | 2307.02762#70 | PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations | Nowadays, the quality of responses generated by different modern large
language models (LLMs) are hard to evaluate and compare automatically. Recent
studies suggest and predominantly use LLMs as a reference-free metric for
open-ended question answering. More specifically, they use the recognized
"strongest" LLM as the evaluator, which conducts pairwise comparisons of
candidate models' answers and provides a ranking score. However, this intuitive
method has multiple problems, such as bringing in self-enhancement (favoring
its own answers) and positional bias. We draw insights and lessons from the
educational domain (Cho and MacArthur, 2011; Walsh, 2014) to improve LLM-based
evaluations. Specifically, we propose the (1) peer rank (PR) algorithm that
takes into account each peer LLM's pairwise preferences of all answer pairs,
and outputs a final ranking of models; and (2) peer discussion (PD), where we
prompt two LLMs to discuss and try to reach a mutual agreement on preferences
of two answers. We conduct experiments on two benchmark datasets. We find that
our approaches achieve higher accuracy and align better with human judgments,
respectively. Interestingly, PR can induce a relatively accurate self-ranking
of models under the anonymous setting, where each model's name is unrevealed.
Our work provides space to explore evaluating models that are hard to compare
for humans. | http://arxiv.org/pdf/2307.02762 | Ruosen Li, Teerth Patel, Xinya Du | cs.CL, cs.AI | null | null | cs.CL | 20230706 | 20230706 | [
{
"id": "1803.05457"
},
{
"id": "2112.09332"
},
{
"id": "2304.03442"
},
{
"id": "2306.04181"
},
{
"id": "2302.04166"
},
{
"id": "2112.00861"
},
{
"id": "2305.14314"
},
{
"id": "2211.09110"
},
{
"id": "1904.09675"
},
{
"id": "2305.14627"
},
{
"id": "2305.11206"
},
{
"id": "2305.10142"
},
{
"id": "2303.17760"
},
{
"id": "2305.14387"
},
{
"id": "2303.16634"
}
] |
2307.03109 | 70 | and information retrieval-based test prioritization, the current version of ChatGPT fails to provide accurate answers, rendering it unsuitable for these specific tasks [181]. In commonsense planning tasks, LLMs may not perform well, even in simple planning tasks where humans excel [194, 195]. Pallagani et al. [150] demonstrated that the fine-tuned CodeT5 [214] performs the best across all considered domains, with the shortest inference time. Moreover, it explored the capability of LLMs for plan generalization and found that their generalization capabilities appear to be limited. It turns out that LLMs can handle simple engineering tasks, but they perform poorly on complex engineering tasks. | 2307.03109#70 | A Survey on Evaluation of Large Language Models | Large language models (LLMs) are gaining increasing popularity in both
academia and industry, owing to their unprecedented performance in various
applications. As LLMs continue to play a vital role in both research and daily
use, their evaluation becomes increasingly critical, not only at the task
level, but also at the society level for better understanding of their
potential risks. Over the past years, significant efforts have been made to
examine LLMs from various perspectives. This paper presents a comprehensive
review of these evaluation methods for LLMs, focusing on three key dimensions:
what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide
an overview from the perspective of evaluation tasks, encompassing general
natural language processing tasks, reasoning, medical usage, ethics,
educations, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the
evaluation methods and benchmarks, which serve as crucial components in
assessing performance of LLMs. Then, we summarize the success and failure cases
of LLMs in different tasks. Finally, we shed light on several future challenges
that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to
researchers in the realm of LLMs evaluation, thereby aiding the development of
more proficient LLMs. Our key point is that evaluation should be treated as an
essential discipline to better assist the development of LLMs. We consistently
maintain the related open-source materials at:
https://github.com/MLGroupJLU/LLM-eval-survey. | http://arxiv.org/pdf/2307.03109 | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie | cs.CL, cs.AI | Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST); 45 pages; More recent works; https://llm-eval.github.io/ | null | cs.CL | 20230706 | 20231229 | [
{
"id": "2212.13138"
},
{
"id": "2305.14693"
},
{
"id": "2108.07258"
},
{
"id": "2309.10691"
},
{
"id": "2306.09212"
},
{
"id": "2308.08833"
},
{
"id": "2304.00228"
},
{
"id": "2303.02155"
},
{
"id": "2310.02174"
},
{
"id": "2305.15771"
},
{
"id": "2104.14337"
},
{
"id": "2305.10355"
},
{
"id": "2305.10263"
},
{
"id": "2306.04757"
},
{
"id": "2307.00184"
},
{
"id": "2205.01068"
},
{
"id": "2304.06364"
},
{
"id": "2305.13788"
},
{
"id": "2305.02182"
},
{
"id": "2304.01457"
},
{
"id": "2305.07609"
},
{
"id": "2305.17306"
},
{
"id": "2304.09542"
},
{
"id": "2305.14982"
},
{
"id": "2206.04615"
},
{
"id": "2306.02408"
},
{
"id": "2306.01337"
},
{
"id": "2306.01590"
},
{
"id": "2305.03514"
},
{
"id": "2304.03738"
},
{
"id": "2303.13835"
},
{
"id": "2306.02864"
},
{
"id": "2303.12712"
},
{
"id": "2306.04504"
},
{
"id": "2206.10498"
},
{
"id": "2105.09938"
},
{
"id": "2304.07333"
},
{
"id": "2307.00112"
},
{
"id": "2305.13711"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2306.07799"
},
{
"id": "2301.12307"
},
{
"id": "2307.01135"
},
{
"id": "2306.04618"
},
{
"id": "2305.11700"
},
{
"id": "2306.05179"
},
{
"id": "2306.07075"
},
{
"id": "2305.19555"
},
{
"id": "2301.01768"
},
{
"id": "2304.07619"
},
{
"id": "2305.15269"
},
{
"id": "2304.02210"
},
{
"id": "2009.03300"
},
{
"id": "2305.16151"
},
{
"id": "2306.13394"
},
{
"id": "2306.04926"
},
{
"id": "2305.18486"
},
{
"id": "2304.08244"
},
{
"id": "2301.13867"
},
{
"id": "2008.02275"
},
{
"id": "2301.12868"
},
{
"id": "2305.09645"
},
{
"id": "2211.09110"
},
{
"id": "2310.20499"
},
{
"id": "2303.09038"
},
{
"id": "2305.16837"
},
{
"id": "2308.02490"
},
{
"id": "2306.11698"
},
{
"id": "2302.14045"
},
{
"id": "2308.03656"
},
{
"id": "2306.11507"
},
{
"id": "2304.02015"
},
{
"id": "2306.01499"
},
{
"id": "1910.13461"
},
{
"id": "1910.14599"
},
{
"id": "2306.09296"
},
{
"id": "2210.07197"
},
{
"id": "2309.07915"
},
{
"id": "2005.04118"
},
{
"id": "2306.04610"
},
{
"id": "2305.14387"
},
{
"id": "2306.02549"
},
{
"id": "2304.04339"
},
{
"id": "2305.11171"
},
{
"id": "2211.08073"
},
{
"id": "2305.15074"
},
{
"id": "2301.11596"
},
{
"id": "2303.17580"
},
{
"id": "2309.11998"
},
{
"id": "1909.08593"
},
{
"id": "2210.02414"
},
{
"id": "2306.16636"
},
{
"id": "2304.01938"
},
{
"id": "2302.12297"
},
{
"id": "2308.01862"
},
{
"id": "2103.06268"
},
{
"id": "2302.13971"
},
{
"id": "2209.12106"
},
{
"id": "2304.05613"
},
{
"id": "2207.08143"
},
{
"id": "2306.08997"
},
{
"id": "2111.02840"
},
{
"id": "2305.15005"
},
{
"id": "2303.12528"
},
{
"id": "1707.06875"
},
{
"id": "2305.01210"
},
{
"id": "2201.11990"
},
{
"id": "2305.14938"
},
{
"id": "2306.06331"
},
{
"id": "2305.08322"
},
{
"id": "2306.09841"
},
{
"id": "2307.09042"
},
{
"id": "2306.04563"
},
{
"id": "2307.06281"
},
{
"id": "2306.10512"
},
{
"id": "2306.13651"
},
{
"id": "2304.08354"
},
{
"id": "2306.04181"
},
{
"id": "2309.05922"
},
{
"id": "2310.03214"
},
{
"id": "2306.05087"
},
{
"id": "2306.06687"
},
{
"id": "2303.18223"
},
{
"id": "1904.09675"
},
{
"id": "2205.00445"
},
{
"id": "2311.15296"
},
{
"id": "2306.09265"
},
{
"id": "2302.04023"
},
{
"id": "2307.16125"
},
{
"id": "2205.12255"
},
{
"id": "2305.17926"
},
{
"id": "2306.04528"
},
{
"id": "2307.16789"
},
{
"id": "2303.16421"
},
{
"id": "2304.00723"
},
{
"id": "2306.07622"
},
{
"id": "2309.07045"
},
{
"id": "2212.02774"
},
{
"id": "2109.07958"
},
{
"id": "2306.06264"
},
{
"id": "2303.12057"
},
{
"id": "2306.01694"
},
{
"id": "2204.01906"
},
{
"id": "2302.06476"
},
{
"id": "2307.02046"
},
{
"id": "2305.14251"
},
{
"id": "2306.04308"
},
{
"id": "2204.02311"
},
{
"id": "1810.04805"
},
{
"id": "2305.12421"
},
{
"id": "2304.03439"
},
{
"id": "2306.14565"
},
{
"id": "2305.16934"
},
{
"id": "2309.09150"
},
{
"id": "2309.12284"
},
{
"id": "2206.07682"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2306.15261"
},
{
"id": "2305.11792"
},
{
"id": "2307.09705"
},
{
"id": "2211.01910"
},
{
"id": "2301.12867"
},
{
"id": "2303.08774"
},
{
"id": "2109.00859"
},
{
"id": "2203.13474"
},
{
"id": "2306.03090"
},
{
"id": "2012.15723"
},
{
"id": "2305.18365"
},
{
"id": "2307.04657"
},
{
"id": "2111.08181"
},
{
"id": "2104.08663"
},
{
"id": "2305.01181"
},
{
"id": "2112.00861"
},
{
"id": "2303.08896"
},
{
"id": "2305.15268"
},
{
"id": "2305.14975"
},
{
"id": "1804.07461"
},
{
"id": "2309.11737"
},
{
"id": "2304.01852"
},
{
"id": "2309.01219"
},
{
"id": "2306.05685"
},
{
"id": "2306.05783"
},
{
"id": "2201.08239"
},
{
"id": "2307.13692"
},
{
"id": "2307.02477"
},
{
"id": "2306.05715"
},
{
"id": "2302.11382"
},
{
"id": "2305.11262"
},
{
"id": "2306.01248"
},
{
"id": "2204.04991"
},
{
"id": "2306.08302"
}
] |
2307.03172 | 70 | 20 Total Retrieved Documents (~4K tokens)
70 Accuracy e ul fea) fo} fo} fo} w o 20 1st 5th 10th Position of Document with the Answer 15th 20th =â@â Llama-2-7b-chat-hf =@-» Llama-2-7b-hf =@- Llama-2-13b-chat-hf © @ +» Llama-2-13b-hf =@- Llama-2-70b-chat-hf -@-» Llama-2-70b-hf
Figure 16: Multi-document QA performance (20 total documents) of Llama-2 models of varying sizes (7B, 13B, 70B parameters), with and without additional su- pervised fine-tuning and reinforcement learning from human feedback (â-chat-â models).
# F Token Counts | 2307.03172#70 | Lost in the Middle: How Language Models Use Long Contexts | While recent language models have the ability to take long contexts as input,
relatively little is known about how well they use longer context. We analyze
the performance of language models on two tasks that require identifying
relevant information in their input contexts: multi-document question answering
and key-value retrieval. We find that performance can degrade significantly
when changing the position of relevant information, indicating that current
language models do not robustly make use of information in long input contexts.
In particular, we observe that performance is often highest when relevant
information occurs at the beginning or end of the input context, and
significantly degrades when models must access relevant information in the
middle of long contexts, even for explicitly long-context models. Our analysis
provides a better understanding of how language models use their input context
and provides new evaluation protocols for future long-context language models. | http://arxiv.org/pdf/2307.03172 | Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang | cs.CL | 18 pages, 16 figures. Accepted for publication in Transactions of the
Association for Computational Linguistics (TACL), 2023 | null | cs.CL | 20230706 | 20231120 | [
{
"id": "2302.13971"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2201.08239"
},
{
"id": "2205.14135"
},
{
"id": "2306.13421"
},
{
"id": "2302.00083"
},
{
"id": "2211.08411"
},
{
"id": "2305.14196"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2112.09118"
},
{
"id": "2301.12652"
},
{
"id": "2205.05131"
},
{
"id": "2208.03188"
}
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.