doi (string, 10 chars) | chunk-id (int64, 0-936) | chunk (string, 401-2.02k chars) | id (string, 12-14 chars) | title (string, 8-162 chars) | summary (string, 228-1.92k chars) | source (string, 31 chars) | authors (string, 7-6.97k chars) | categories (string, 5-107 chars) | comment (string, 4-398 chars, ⌀ nullable) | journal_ref (string, 8-194 chars, ⌀ nullable) | primary_category (string, 5-17 chars) | published (string, 8 chars) | updated (string, 8 chars) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.17563 | 9 |
generation (Sachan et al., 2022). Figure 1 (a) shows the prompt used for relevance generation. The relevance score $s_i$ is defined as:
$$s_i = \begin{cases} 1 + p(\text{Yes}), & \text{if the output is Yes} \\ 1 - p(\text{No}), & \text{if the output is No} \end{cases}$$
where $p(\text{Yes})$ and $p(\text{No})$ denote the probabilities of the LLM generating "Yes" and "No" respectively. The query generation approach asks the LLM to generate a query based on the document and measures the probability of generating the actual query. Readers can refer to (Sachan et al., 2022) for more details.
There are two major issues with pointwise approaches. First, pointwise relevance prediction requires the model to output calibrated pointwise predictions so that they can be used for comparisons in sorting. This is not only very difficult to achieve across prompts, but also unnecessary for ranking, which only requires relative ordering. In fact, the entire learning-to-rank field (Liu, 2009) is based on this observation. Second, pointwise methods do not work with generation-only APIs (which are common, e.g., GPT-4), since sorting requires the log probabilities of the desired predictions.
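As a minimal sketch of how such a pointwise relevance score could be computed from token log-probabilities, assuming a hypothetical scoring API that exposes the log-probabilities of the "Yes" and "No" continuations (the function below is illustrative, not the cited implementation):

```python
import math

def relevance_score(yes_logprob: float, no_logprob: float) -> float:
    """Map log-probabilities of generating "Yes"/"No" to the pointwise score:
    1 + p(Yes) when the model outputs Yes, 1 - p(No) when it outputs No.
    Which output the model "produces" is approximated here by whichever of
    the two tokens is more likely."""
    p_yes, p_no = math.exp(yes_logprob), math.exp(no_logprob)
    return 1.0 + p_yes if p_yes >= p_no else 1.0 - p_no

# Example: the model strongly prefers "Yes" for this (query, document) pair.
print(relevance_score(yes_logprob=-0.05, no_logprob=-3.2))  # ~1.95
```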
2.2 LISTWISE APPROACHES | 2306.17563#9 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 9 | Wu (Wu, 2023) presented a study comparing human- and AI-generated corpora, covering both evaluation and detection, to answer the research question "How close is ChatGPT to human experts?" through comprehensive human evaluations and linguistic analysis compared with that of humans.
In recent years, students have increasingly been evaluated using online exam tools instead of traditional in-person exams, a trend that became even more useful during and after the Covid-19 pandemic. Many recent researchers are focusing on how ChatGPT performs on university-level examination questions. Similarly, Thurzo et al. (Thurzo et al., 2023) provide a brief analysis of the use of AI in dentistry and how it affects the foundations of dental education, including essay, thesis, and research paper writing. | 2307.00112#9 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 10 | performance gain from self-bootstrapping is lower compared to adding high-quality outputs generated by other LLMs to the preference ranking sequence.
# 2 Preliminary
We commence by providing a brief review of RLHF. In order to train an LLM to generate responses that align with human preferences, RLHF consists of three stages, which are outlined as follows:
The first stage is Supervised Fine-tuning (SFT): labelers furnish the desired behavior's response with $t$ tokens, denoted as $y = y_{1,\cdots,t}$, for a given input prompt $x$. Subsequently, RLHF fine-tunes a pre-trained LLM using supervised learning (maximum likelihood) on this data, resulting in a model denoted as $\pi^{\text{SFT}}$:
$$\mathcal{L}_{\text{SFT}} = -\sum_{t} \log P_{\pi^{\text{SFT}}}(y_t \mid x, y_{1,\cdots,t-1}). \quad (1)$$ | 2306.17492#10 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 10 | 2.2 LISTWISE APPROACHES
Very recently, two parallel works explore listwise approaches, by directly inserting the query and a list of documents into a prompt. Both methods feed a partial list of 10 or 20 documents every time and perform a sliding window approach due to the prompt length constraints. Figure 1 (b) shows a simplified version of the listwise ranking prompt. Both works explored text-davinci-003, i.e., InstructGPT (Ouyang et al., 2022) with 175B parameters, showing significantly worse performance than fine-tuned baseline rankers. (Sun et al., 2023) were able to further explore gpt-3.5-turbo (the model behind ChatGPT) and GPT-4. Only the GPT-4 based approach could achieve competitive results, which is based on the blackbox, commercial, and giant (1T estimated parameters (VanBuskirk, 2023; Baktash & Dawodi, 2023)) system, without academic publication discussing technical details.
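A rough sketch of the sliding-window strategy described above, in Python. The `rerank_window` callable is a hypothetical stand-in for a single listwise LLM call over a partial list, and the window/stride values are illustrative rather than the exact settings of the cited works:

```python
from typing import Callable, List

def sliding_window_rerank(query: str, docs: List[str],
                          rerank_window: Callable[[str, List[str]], List[str]],
                          window: int = 20, stride: int = 10) -> List[str]:
    """Re-rank a long candidate list with overlapping listwise prompts,
    sweeping from the bottom of the list to the top so that strong
    documents can bubble upward across windows."""
    docs = list(docs)
    start = max(len(docs) - window, 0)
    while True:
        end = min(start + window, len(docs))
        docs[start:end] = rerank_window(query, docs[start:end])  # one LLM call
        if start == 0:
            break
        start = max(start - stride, 0)
    return docs
```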
The issues are again due to the difficulty of the listwise ranking task for LLMs. (Sun et al., 2023) show that there are frequent prediction failures with the following patterns, especially for smaller models: | 2306.17563#10 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 10 | One of the most common examinations in the USA is the United States Medical Licensing Examination (USMLE). Several recent articles have tested the capability of ChatGPT on USMLE questions. Gilson (Gilson et al., 2022) evaluated the performance of ChatGPT and interpreted its responses on the USMLE Step 1 and Step 2 exams, finding performance above the 60% threshold on the Step 1 exam and making it a viable medical education tool. Analysis of medical education tools is an important topic in the education field, so we dig further into the USMLE exam to evaluate the performance of this artificial intelligence language model. Likewise, Kung (Kung et al., 2022) evaluated ChatGPT's performance on the USMLE, where it achieved scores at or near the passing threshold for all three exams without specialized training. Their research suggested ChatGPT's potential in medical education and clinical decision-making, though ethical considerations like bias and transparency must be addressed. | 2307.00112#10 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 11 | $\mathcal{L}_{\text{SFT}} = -\sum_{t} \log P_{\pi^{\text{SFT}}}(y_t \mid x, y_{1,\cdots,t-1}). \quad (1)$
In the second phase, the SFT model is utilized by providing prompts $x$ to generate pairs of responses. These pairs are then presented to human labelers, who express their preferences by indicating a favored answer as $y^1$, while the other response is denoted as $y^2$. Specifically, we write $y^1 \succ y^2 \mid x$ to represent the preferences of human labelers. To predict these preferences, previous work employs the Bradley-Terry (BT) model, which defines the preference probability as follows:
$$P_{\text{BT}}(y^1 \succ y^2 \mid x) = \frac{\exp(r_\phi(x, y^1))}{\exp(r_\phi(x, y^1)) + \exp(r_\phi(x, y^2))} \quad (2)$$
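As a quick numeric illustration of Equation (2) and of the pairwise logistic loss described immediately below, here is a plain-Python sketch; `r1` and `r2` stand for the reward model's scalar outputs $r_\phi(x, y^1)$ and $r_\phi(x, y^2)$:

```python
import math

def bt_preference_prob(r1: float, r2: float) -> float:
    """Bradley-Terry probability that response y1 is preferred over y2."""
    return math.exp(r1) / (math.exp(r1) + math.exp(r2))

def bt_loss(r1: float, r2: float) -> float:
    """Negative log-likelihood of the human label y1 > y2: -log sigmoid(r1 - r2)."""
    return -math.log(1.0 / (1.0 + math.exp(-(r1 - r2))))

print(bt_preference_prob(2.0, 0.5))  # ~0.82
print(bt_loss(2.0, 0.5))             # ~0.20
```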
This objective is framed as a binary classification problem to train the reward model: $\mathcal{L}_{\text{BT}} = -\log \sigma(r_\phi(x, y^1) - r_\phi(x, y^2))$, where $\sigma$ is the logistic function and $r_\phi$ is the reward model. During the third phase, RLHF utilizes the acquired $r_\phi$ to provide feedback to $\pi^{\text{SFT}}$. Specifically, RLHF formulates the following optimization problem: | 2306.17492#11 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 11 | • Missing: LLMs only output a partial list of the input documents.
• Rejection: LLMs refuse to perform the ranking task and produce irrelevant outputs.
• Repetition: LLMs output the same document more than once.
• Inconsistency: The same list of documents receives different output rankings when fed in with a different order or context.
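A small sketch of how the Missing, Rejection, and Repetition patterns above could be detected when parsing a listwise output such as "[2] > [1] > [3]"; the bracketed-id format and the regular expression are assumptions for illustration, not taken from the cited works:

```python
import re
from typing import List

def parse_listwise_output(output: str, num_docs: int) -> List[int]:
    """Parse a ranking like '[2] > [1] > [3]' and flag common failure patterns."""
    ranking = [int(m) for m in re.findall(r"\[(\d+)\]", output)]
    if not ranking:
        raise ValueError("rejection/unparseable: no ranked ids found")
    if len(set(ranking)) != len(ranking):
        raise ValueError("repetition: a document id appears more than once")
    missing = set(range(1, num_docs + 1)) - set(ranking)
    if missing:
        raise ValueError(f"missing: ids {sorted(missing)} absent from the output")
    return ranking

print(parse_listwise_output("[2] > [1] > [3]", num_docs=3))  # [2, 1, 3]
```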
In fact, we tried the exact same prompt from (Sun et al., 2023) on the FLAN-UL2 model with 20B parameters, and found very few of the outputs to be usable. The model will either output only a few documents (e.g., "[1]"), an ordered list based on id (e.g., "[1] > [2] > [3] ..."), or text which is not parseable.
Different from pointwise approaches, listwise approaches can only use the generation API, since getting the log probability of all listwise permutations is prohibitively expensive. In other words, there is no good solution if the generation API does not output the desired results, which is common. These methods will fall back to the initial ranking, and due to the high failure rate, the results are highly sensitive to input ordering. | 2306.17563#11 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 11 | Not only in medical schools: Choi (Choi, 2023) assessed ChatGPT's ability to complete law school exams without human assistance using four real exams from the University of Minnesota Law School. Blind grading revealed that ChatGPT performed at the level of a C+ student, obtaining a passing grade in all courses. Researchers have also raised a few concerns about exam integrity because of the use of ChatGPT. (Susnjak, 2022) revealed that high-quality text generation with minimal input poses a significant threat to exam integrity, particularly in tertiary education. The researcher also suggested that ChatGPT can generate such answers and proposed some potential countermeasures to the threat to exam integrity. Another concern was raised by (Kasneci et al., 2023), who discussed the benefits and challenges of using large language models in education, such as creating personalized learning experiences; however, issues like potential bias and misuse must be addressed. | 2307.00112#11 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 12 | $$\max_{\pi_\theta} \; \mathbb{E}\left[ r_\phi(x, y) - \beta \log \frac{\pi_\theta(y \mid x)}{\pi^{\text{SFT}}(y \mid x)} \right] \quad (3)$$
Here, $\beta$ controls the deviation from the base reference policy $\pi^{\text{SFT}}$ to maintain generation diversity and prevent the model from generating only high-reward but meaningless answers. It is important to note that this objective is non-differentiable and is typically optimized using reinforcement learning methods, such as Proximal Policy Optimization (PPO) (Schulman et al., 2017). Consequently,
RLHF has been criticized for several drawbacks, including increased complexity compared to supervised learning, sensitivity to hyperparameters, and the requirement for additional training of reward models and value networks.
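A per-sample sketch of the quantity being maximized in Equation (3): the reward minus a KL-style penalty against the SFT policy. The numbers are placeholders; in practice this estimate feeds an RL algorithm such as PPO rather than being optimized directly:

```python
def rlhf_objective_estimate(reward: float, logprob_policy: float,
                            logprob_sft: float, beta: float = 0.1) -> float:
    """Single-sample estimate of r(x, y) - beta * log(pi_theta(y|x) / pi_sft(y|x))."""
    return reward - beta * (logprob_policy - logprob_sft)

# A response the current policy likes far more than the SFT policy gets penalized.
print(rlhf_objective_estimate(reward=1.2, logprob_policy=-4.0, logprob_sft=-9.0))  # ~0.7
```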
# 3 Methodology
In this section, we first derive the evolution from the Bradley-Terry comparison to our proposed PRO, achieving a shift from reward model-oriented preference alignment to aligning the probabilistic ranking of n responses generated by the LLM with the human preference ranking. This alignment helps to avoid the numerous drawbacks associated with RLHF. Furthermore, we demonstrate the flexibility of our PRO in its ability to integrate with the reward model, thereby attaining advantages such as affordable preference ranking, differentiated contrast, and self-bootstrapping augmentation. | 2306.17492#12 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 12 | These observations are not entirely surprising. Existing popular LLMs are generally not specifically pre-trained or fine-tuned against ranking tasks. However, we next show that LLMs do have a sense of pairwise relative comparisons, which is much simpler than requiring a calibrated pointwise relevance estimation or outputting a permutation for a list of documents.
# 3 PAIRWISE RANKING PROMPTING
We propose pairwise ranking prompting (PRP) for ranking with LLMs. We describe the basic pairwise prompting unit, how it supports both generation and scoring APIs, and propose several variants of PRP with different ranking strategies and efficiency properties.
| 2306.17563#12 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 12 | Technological advancements in AI and natural language processing have made it possible to develop AI prompt models like ChatGPT that can be used for reasoning over different texts. The medical education field is very vast, and besides books, the internet also serves as a reference for answers. But can we also depend on artificial intelligence for these answers? Can prompt models, especially ChatGPT, answer medical questions? If yes, how accurate are the answers? What type of questions does it answer best? Does ChatGPT gauge the complexity of questions? Can it be used to train medical personnel as well as patients? We perform this study to analyze whether ChatGPT can complete prevalent tasks related to complex medical and clinical information. In this study, we perform a detailed analysis of ChatGPT on the United States Medical Licensing Examination (USMLE) Step 1 as well as an ethical questionnaire from Harvard University to find the trend in accuracy and the possibility of using it as an assistant in e-learning. | 2307.00112#12 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 13 | # 3.1 From RLHF to PRO
Upon re-evaluating the process of RLHF mentioned above, it becomes evident that the criticized shortcomings of RLHF stem from its utilization of a reward model as a learning proxy. Therefore, if we eliminate the proxy and directly optimize the LLM to learn the objective of the reward model, can we circumvent the aforementioned challenges? For this purpose, we re-examine the objective of the reward model, Bradley-Terry (Equation 2), which aims to make the model understand $y^1 \succ y^2$ through score comparison. For a given prompt $x$, assuming that there are only two responses, $y^1$ and $y^2$, in the LLM response space, the reward model should prefer $y^1$. Naturally, if we expand the response space of the LLM, for example, there exist $n$ possible responses $\{y^i\}$, and the human-annotated order $y^{1:n}$ is $y^1 \succ y^2 \succ \cdots \succ y^n$. We define the partial order between $y^1$ and the candidates behind it as $y^{1,2:n} = y^1 \succ \{y^2, \cdots, y^n\}$; then the objective of Bradley-Terry becomes:
$$P(y^{1,2:n} \mid x) = \frac{\exp(r(x, y^1))}{\sum_{i=1}^{n} \exp(r(x, y^i))} \quad (4)$$ | 2306.17492#13 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 13 |
Given a query "what is reba mcentire's net worth", which of the following two passages is more relevant to the query?
Passage A: Reba Mcentire. Reba Mcentire Net Worth is $65 Million. Reba McEntire is a country music star and actress, originally from Oklahoma, with an estimated net worth of $65 million dollars. Reba McEntire began performing on the rodeo circuit and was discovered by Red Ste. Reba Nell McEntire (born Mar...
Passage B: Born March 28, 1955, in McAlester, Oklahoma, Reba McEntire got her break singing the national anthem at the 1974 rodeo finals. McEntire has recorded with Mercury and MCA records, topped the country charts numerous times, and been named best female vocalist by the Country Music Association multiple times.
Output Passage A or Passage B:
Generation mode: LLM generated text: "Passage A"
Scoring mode: Target text: "Passage A", Score: -0.0012; Target text: "Passage B", Score: -6.9116
Figure 2: An illustration of pairwise ranking prompting. The scores in scoring mode represent the log-likelihood of the model generating the target text given the prompt.
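A sketch of how the pairwise prompt shown in the figure could be assembled as a string; the wording follows the figure, but treat it as an approximation of the template rather than the authors' verbatim prompt:

```python
def pairwise_prompt(query: str, passage_a: str, passage_b: str) -> str:
    """Assemble the pairwise ranking prompt u(q, d1, d2) illustrated above."""
    return (
        f'Given a query "{query}", which of the following two passages is '
        "more relevant to the query?\n\n"
        f"Passage A: {passage_a}\n\n"
        f"Passage B: {passage_b}\n\n"
        "Output Passage A or Passage B:"
    )
```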
3.1 PROMPTING DESIGN | 2306.17563#13 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 13 | MATERIALS AND METHODS Data Collection Based on the objective of the study, the paper incorporated the United States Medical Licensing Examination (USMLE) questionnaire (Murphy, 1995). The standard USMLE Step 1 question set consisted of n = 119 total questions and n = 10 short-answer questions. To explore the further potential of ChatGPT, we also tested final-year questions from the Gross Anatomy course of Harvard University. Because a chatbot's performance depends on how the question is asked, the questions needed to be reformatted and screened, so the step after collecting the questions was to reformat and screen them. ChatGPT only accepts questions as text, so images, charts, and formulae had to be removed; since ChatGPT is a language model based on the GPT (Generative Pre-trained Transformer) architecture, providing images and similar data is not suitable. Of the 119 questions, 45 contained images, charts, or formulae and were therefore screened out, leaving a screened USMLE question set of n = 74. To analyze ChatGPT's critical-reasoning answers, we added 40 questions related to the Ethics section of the medical exam from the Amboss platform (www.amboss.com). | 2307.00112#13 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 14 | $$P(y^{1,2:n} \mid x) = \frac{\exp(r(x, y^1))}{\sum_{i=1}^{n} \exp(r(x, y^i))} \quad (4)$$
Furthermore, it is important to acknowledge that this objective does not fully leverage the ranking $y^{1:n}$, since it only characterizes $y^1 \succ y^2, \cdots, y^n$, disregarding the $n-2$ valuable rankings such as $y^2 \succ y^3, \cdots, y^n$ and $y^{n-1} \succ y^n$. Consequently, we
[Figure 2 example prompt: "Human: How do I attract butterflies to my garden? Assistant: How would you like them to be attracted? Human: They are pretty. Assistant:" followed by the SFT model's candidate responses.]
Figure 2: The pipeline of PRO for Human Feedback Alignment learning. Each candidate is concatenated with the prompt first, then processed by the LLM to estimate corresponding rewards, which are optimized by Equation 7.
propose an extension to Equation 4 as follows:
$$P(y^{1:n} \mid x) = \prod_{k=1}^{n-1} P(y^{k, k+1:n} \mid x) = \prod_{k=1}^{n-1} \frac{\exp(r(x, y^k))}{\sum_{i=k}^{n} \exp(r(x, y^i))} \quad (5)$$ | 2306.17492#14 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 14 | 3.1 PROMPTING DESIGN
Our pairwise ranking prompt is simple and intuitive, as shown in Figure 2. This pairwise prompting will serve as the basic computation unit in all PRP variants, which we denote as u(q, d1, d2) for a query q and two documents d1 and d2.
PRP naturally supports both generation API and scoring API. The latter is made possible since we only have two expected outputs ("Passage A" and "Passage B") for LLM inquiries. Furthermore, as we focus on open-sourced LLMs, getting probabilities from LLMs is simple. Since using scoring mode can mitigate potential issues when the generation API generates irrelevant outputs, our main results are based on the scoring mode. We will provide some comparisons between these two modes in Section 4.6.
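A sketch of the scoring-mode comparison just described, assuming a hypothetical `target_logprob(prompt, target)` function that returns the log-likelihood of generating a target string given a prompt, together with a prompt builder like the `pairwise_prompt` sketch earlier:

```python
from typing import Callable, Optional

def compare_pair(query: str, d1: str, d2: str,
                 build_prompt: Callable[[str, str, str], str],
                 target_logprob: Callable[[str, str], float]) -> Optional[str]:
    """Scoring mode: score the two allowed target texts under the pairwise
    prompt and return which document the LLM prefers (None on a tie)."""
    prompt = build_prompt(query, d1, d2)
    score_a = target_logprob(prompt, "Passage A")
    score_b = target_logprob(prompt, "Passage B")
    if score_a == score_b:
        return None
    return "d1" if score_a > score_b else "d2"
```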
Since it is known that LLMs can be sensitive to text orders in the prompt (Lu et al., 2022), for each pair of documents, we will inquire the LLM twice by swapping their order (u(q, d1, d2) and u(q, d2, d1)). We have a local ordering of d1 > d2 or d2 > d1 if both promptings make consistent decisions, and have d1 = d2 otherwise. | 2306.17563#14 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 14 | Q. In a sample of 100 individuals, the mean leukocyte count is 7500/mm3, with a standard deviation of 1000/mm3. If the leukocyte counts in this population follow a normal (gaussian) distribution, approximately 50% of individuals will have which of the following total leukocyte counts? : (A) 5500-9500/mm3 (B) <6500/mm3 or >8500/mm3 (C) 6500-8500/mm3 (D) <7500/mm3 (E) >9500/mm3 A. C) 6500-8500/mm3
Figure 1 Sample Question of USMLE Exam
On receiving the questions, we formatted the MCQs with no additional prompts or clarifications. Further, we grouped the questions into subjects such as Microbiology, Systems, and Pharmacology, as per the USMLE test coverage.
[Figure 2 workflow: questions collection (short MCQs n = 119, short questions n = 10) → format the question input → screening (n = 45 MCQs removed due to images and data; n = 45 ethics questions added) → grading the answers → overall performance (screened MCQs n = 74, ethics MCQs n = 45, short questions n = 10).]
Figure 2: Schematic of the workflow for sourcing and formatting the questions. | 2307.00112#14 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 15 | $$P(y^{1:n} \mid x) = \prod_{k=1}^{n-1} P(y^{k, k+1:n} \mid x) = \prod_{k=1}^{n-1} \frac{\exp(r(x, y^k))}{\sum_{i=k}^{n} \exp(r(x, y^i))} \quad (5)$$
To impose the desired ranking y^1 ≻ y^2 ≻ ··· ≻ y^n, we use Equation 4 in a recursive manner, where we start with the first response, treat the remaining responses as negatives, drop the current response, and move to the next. We repeat this procedure until there are no responses left. Surprisingly, this objective aligns closely with the ultimate goal of human alignment, which is the task of selecting desired responses from the vast response space of LLMs (Rafailov et al., 2023). In other words, if n → ∞, then Equation 5 is able to exhaustively explore all possible responses and annotate y^1 as the most desired response, thus achieving perfect alignment with humans.
(5) | 2306.17492#15 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 15 | Next we discuss three variants of PRP using pairwise ranking prompting as the computation unit. We note that pairwise comparison can serve as the basic computation unit of many algorithms (e.g., selection algorithm) and leave other alternatives for future work.
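To make the variants below concrete, here is a minimal Python sketch of the pairwise comparison unit. It is an illustration only: the prompt wording paraphrases the pairwise ranking prompt described above, llm_generate is a hypothetical stand-in for whatever generation API is used, and querying both input orders before accepting a preference mirrors the handling of conflicting outputs described for PRP-Allpair.

```python
# Illustrative sketch only: `llm_generate` is a hypothetical callable that maps a
# prompt string to the model's text output; the prompt wording is a paraphrase.
PRP_PROMPT = (
    'Given a query "{query}", which of the following two passages is more relevant '
    "to the query?\n\nPassage A: {a}\n\nPassage B: {b}\n\nOutput Passage A or Passage B:"
)

def prp_prefers_first(llm_generate, query, passage_a, passage_b):
    """Return True if the LLM prefers passage_a, False for passage_b, None if unclear."""
    output = llm_generate(PRP_PROMPT.format(query=query, a=passage_a, b=passage_b)).strip()
    if output.startswith("Passage A"):
        return True
    if output.startswith("Passage B"):
        return False
    return None  # irrelevant output from the generation API

def prp_compare(llm_generate, query, d_i, d_j):
    """Query both input orders: +1 if d_i wins both, -1 if d_j wins both, 0 otherwise."""
    first = prp_prefers_first(llm_generate, query, d_i, d_j)
    second = prp_prefers_first(llm_generate, query, d_j, d_i)
    if first is True and second is False:
        return 1   # consistent preference for d_i
    if first is False and second is True:
        return -1  # consistent preference for d_j
    return 0       # conflicting or irrelevant results are treated as a tie
```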
3.2 ALL PAIR COMPARISONS
We enumerate all pairs and perform a global aggregation to generate a score s_i for each document d_i. We call this approach PRP-Allpair. Specifically, we have:
s_i = 1 \cdot \sum_{j \neq i} I_{d_i > d_j} + 0.5 \cdot \sum_{j \neq i} I_{d_i = d_j}    (2)
Intuitively, if the LLM consistently prefers d_i over another document d_j, d_i gets one point. When the LLM is not sure, producing conflicting or irrelevant results (for the generation API), each document gets half a point. There might be ties in the aggregated scores, in which case we fall back to the initial ranking. There could be other ways to weight the scoring function (such as leveraging prediction probabilities or initial ranks), which we leave for future work. | 2306.17563#15 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 15 | Figure 2 : Schematic of workflow for sourcing, formatting the questions.
Experiments. ChatGPT responds instantly, since it relies on its training data and, unlike other chatbots on the internet, does not perform online searches. Because of this, ChatGPT is regarded as suitable for handling long-range dependencies and generating coherent, contextually appropriate responses. ChatGPT provides a Python API through which questions can be asked iteratively. For the experiments, a client program was written with the ChatGPT API to ask the questions listed in a plain-text file, one question per line, and to receive the answers. The responses from ChatGPT were collected in a separate file and then analyzed.
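A minimal sketch of such a client program is shown below. It assumes the legacy openai Python package (pre-1.0 interface) and hypothetical file names questions.txt and answers.txt, and it opens a fresh chat (a new message list) for every question, matching the per-entry chat sessions described later.

```python
import openai  # legacy (<1.0) interface assumed for this sketch

openai.api_key = "YOUR_API_KEY"  # placeholder

def ask_chatgpt(question):
    """Send one question in a fresh chat session and return the reply text."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    return response["choices"][0]["message"]["content"].strip()

with open("questions.txt", encoding="utf-8") as fin, \
     open("answers.txt", "w", encoding="utf-8") as fout:
    for line in fin:
        question = line.strip()
        if not question:
            continue  # skip blank lines between questions
        fout.write(ask_chatgpt(question) + "\n")
```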
For the short-answer questions, which were provided in PDF format, we first extracted the text, similar to the MCQ shown in Figure 1. The same client program designed for the MCQs was used for the short questions, and ChatGPT's responses were again collected in a separate file.
[Figure 3 flowchart: read the source questions from a text file, check whether each question needs reformatting, format the questions, send them to the ChatGPT server, and analyze the results.]
Figure 3 : Schematic workflow of the source question to ChatGPT API | 2307.00112#15 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 16 | (5)
The LLM π_PRO computes the score of response y^k by multiplying the probabilities of each token generated by π_PRO itself. When Equation 5 is fully optimized, π_PRO is able to consistently generate the most preferred response (with a higher output probability) from a candidate set, thereby capturing human preferences. In addition to adhering to human preferences, it is also desirable for the model to generate fluent replies. Therefore, we incorporate the original supervised loss (Equation 1), which requires the model to fit the responses considered the best by humans. Consequently, the overall optimization objective can be summarized as follows:
L(y^{1,\dots,n} \mid x) = L_{PRO} + \beta L_{SFT}    (7), where L_{SFT} is the NLL loss of the top-1 candidate and β is the hyper-parameter that maintains the balance between text quality and human preference. L_{PRO} is defined as:
L_{PRO} = -\sum_{k=1}^{n-1} \log \frac{\exp\left(r_{\pi_{PRO}}(x, y^k)\right)}{\sum_{i=k}^{n} \exp\left(r_{\pi_{PRO}}(x, y^i)\right)}    (8) | 2306.17492#16 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 16 | PRP-Allpair favors simple implementation (all LLM API calls can be executed in parallel, while the methods below perform iterative local refinements), and is highly insensitive to input ordering. The clear drawback is its costly O(N^2) calls to LLM APIs, where N is the number of documents to be ranked for each query.
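As an illustration of the Equation 2 aggregation, a short sketch follows. It assumes a comparator with the same +1/-1/0 contract as the pairwise comparison sketched earlier for PRP, and it breaks ties by the initial ranking as described above; it is not the authors' implementation.

```python
def prp_allpair_rank(compare, query, docs):
    """PRP-Allpair sketch of Equation 2: aggregate pairwise preferences into scores.

    `compare(query, d_i, d_j)` returns +1 if d_i is consistently preferred, -1 if d_j is,
    and 0 for conflicting or irrelevant outputs (e.g. the comparison sketched earlier).
    """
    n = len(docs)
    scores = [0.0] * n
    for i in range(n):
        for j in range(i + 1, n):
            outcome = compare(query, docs[i], docs[j])
            if outcome == 1:
                scores[i] += 1.0
            elif outcome == -1:
                scores[j] += 1.0
            else:
                scores[i] += 0.5  # LLM unsure: each document gets half a point
                scores[j] += 0.5
    # Higher score ranks first; ties fall back to the initial (e.g. BM25) order.
    order = sorted(range(n), key=lambda idx: (-scores[idx], idx))
    return [docs[idx] for idx in order]
```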
[Figure 3 illustration: six successive orderings of Passages A-E, from the initial ranking to the final ranking, showing Passage A moving to the top over one right-to-left pass.]
Figure 3: An illustration of one pass of our sliding window approach. Starting from right to left, we compare each document pair and swap it if the LLM output disagrees with the initial ranking. We can see that the sliding window approach is able to bring up initially lower ranked "Passage A" (shown in green) to the top of the ranking. K such passes will ensure a high-performing top-K ranking.
3.3 SORTING-BASED | 2306.17563#16 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 16 | [Figure 3 flowchart: read the source questions from a text file, check whether each question needs reformatting, format the questions, send them to the ChatGPT server, and analyze the results.]
Figure 3 : Schematic workflow of the source question to ChatGPT API
Data Analysis. A new chat session was created for each entry to handle the diversity of responses and reduce memory-retention bias. The results were evaluated with a two-way ANOVA and post hoc tests, which revealed systematic covariation between question format and prompt. On top of this, the obtained results were independently scored for accuracy, concordance, and insight by physician adjudicators to give the overall performance.
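For illustration, a hedged sketch of such a two-way analysis with statsmodels follows; the dataframe and its column names (score, format, prompt) are hypothetical stand-ins for the per-question grading results.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical per-question grading results: one row per graded answer.
df = pd.DataFrame({
    "score":  [1, 0, 1, 1, 0, 1, 0, 1],
    "format": ["MCQ", "MCQ", "MCQ", "MCQ", "short", "short", "short", "short"],
    "prompt": ["plain", "plain", "ethics", "ethics", "plain", "plain", "ethics", "ethics"],
})

# Two-way ANOVA: does accuracy covary with question format, prompt type, or their interaction?
model = ols("score ~ C(format) * C(prompt)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```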
RESULTS AND DISCUSSION It was interesting to see that, out of a total of 74 logical questions, 43 were answered correctly, 29 were answered incorrectly, and 2 were not answered at all. This corresponds to an overall accuracy of 58.18%, which shows that ChatGPT still needs to improve its ability to answer logical questions before it can serve as a useful analytical tool.
Table 1: Performance test based on the type of question.
Question type                      Total  Not Answered  Correct  Wrong  Accuracy (%)
Logical Questions                     74             2       43     29         58.18
Critical reasoning (Ethics part)      45             0       25     20            60 | 2307.00112#16 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 17 | L_{PRO} = -\sum_{k=1}^{n-1} \log \frac{\exp\left(r_{\pi_{PRO}}(x, y^k)\right)}{\sum_{i=k}^{n} \exp\left(r_{\pi_{PRO}}(x, y^i)\right)}    (8)
Based on this motivation, we propose the Preference Ranking Optimization (PRO) algorithm. Instead of optimizing the LLM to approximate the Reward Model, we propose directly training the LLM to reach Equation 5. Figure 2 demonstrates the pipeline of the PRO algorithm. Specifically, we define r_{\pi_{PRO}}(x, y^k) as the function parameterized by our desired LLM π_PRO:
r_{\pi_{PRO}}(x, y^k) = \frac{1}{|y^k|} \sum_{t=1}^{|y^k|} \log \pi_{PRO}\left(y^k_t \mid x, y^k_{<t}\right)    (6)
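To make Equations 6 and 8 concrete, a small PyTorch-style sketch follows. It is illustrative rather than the authors' code: seq_logprobs is a hypothetical tensor holding the length-normalized sequence log-probabilities r_{π_PRO}(x, y^k) of the n candidate responses, already sorted from most to least preferred.

```python
import torch

def pro_loss(seq_logprobs):
    """PRO loss sketch (Eq. 8): seq_logprobs[k] = r_pi(x, y^k), ordered best-to-worst."""
    n = seq_logprobs.shape[0]
    loss = 0.0
    for k in range(n - 1):
        # Candidate y^k contrasted against itself and every worse-ranked response.
        tail = seq_logprobs[k:]
        loss = loss - (seq_logprobs[k] - torch.logsumexp(tail, dim=0))
    return loss

# Length-normalized sequence log-probabilities (Eq. 6) for 4 ranked responses.
r = torch.tensor([-0.8, -1.1, -1.9, -2.4], requires_grad=True)
print(pro_loss(r))
```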
By comparing Equation 7 and Equation 3, it can be observed that, at the training-objective level, PRO and RLHF have similar structures, but PRO is more efficient. Both PRO and RLHF share the primary goal of human alignment. RLHF achieves this by providing better responses through a reward model with higher discrete scores, requiring RL techniques. In contrast, PRO directly achieves this through ranking scores, thereby avoiding many drawbacks associated with RL. The second objective of both PRO and RLHF is to ensure high-quality model outputs. PRO's alignment objective | 2306.17492#17 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 17 | 3.3 SORTING-BASED
We note that efficient sorting algorithms, such as Quicksort and Heapsort, depend on pairwise comparisons and thus fit perfectly with PRP. We can use the pairwise preferences from LLMs as the comparator for sorting algorithms. We use Heapsort in this paper due to its guaranteed O(N log N) computation complexity. We call this approach PRP-Sorting.
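A sketch of PRP-Sorting under the same assumptions is shown below. For brevity it plugs the pairwise comparator into Python's built-in comparison sort via functools.cmp_to_key instead of a hand-written Heapsort; both need O(N log N) comparisons, and the built-in sort's stability keeps tied documents in their initial order.

```python
from functools import cmp_to_key

def prp_sorting(compare, query, docs):
    """PRP-Sorting sketch: sort documents with the LLM pairwise preference as comparator.

    `compare(query, d_i, d_j)` returns +1 if d_i is preferred, -1 if d_j is, 0 on a tie
    (e.g. the pairwise comparison sketched earlier).
    """
    def cmp(d_i, d_j):
        return -compare(query, d_i, d_j)  # negate so preferred documents sort first

    return sorted(docs, key=cmp_to_key(cmp))
```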
PRP-Sorting favors lower computation complexity than PRP-Allpair while also being largely insensitive to input orders. However, since it performs local comparisons and swaps on-the-fly, its performance needs to be empirically evaluated against the global aggregation approach in PRP-Allpair.
3.4 SLIDING WINDOW
We introduce a sliding window approach that is able to further bring down the computation complexity. One sliding window pass is similar to one pass in the Bubblesort algorithm: given an initial ranking, we start from the bottom of the list, compare and swap document pairs with a stride of 1 on-the-fly based on LLM outputs. One pass only requires O(N) time complexity. See Figure 3 for an illustration. | 2306.17563#17 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 17 | Question type                      Total  Not Answered  Correct  Wrong  Accuracy (%)
Logical Questions                     74             2       43     29         58.18
Critical reasoning (Ethics part)      45             0       25     20            60
It is worth noting that some questions were left unanswered by ChatGPT. Specifically, there were four questions that were not answered in the Logical Questions section, and one question each in the Biostatistics and Immunology sections. It may also be worth investigating why ChatGPT was not able to answer these questions and whether there are any patterns or commonalities among them.
ChatGPT gave wrong answers for a total of 38 questions. It may be worth investigating whether there are any patterns or commonalities among these questions, such as certain types of questions or subjects where ChatGPT struggles to provide accurate answers.
As far as the critical reasoning questions related to ethics are concerned, among 45 questions, 25 were answered correctly, 20 were answered incorrectly, and none were left unanswered. ChatGPT thus appeared to handle critical reasoning questions somewhat better than logical questions, although there is still room for improvement. In the course-wise evaluation, ChatGPT achieved its highest accuracy in Microbiology (80%) and Immunology (75%), while the minimum accuracy of 20% was obtained for Anatomy. Figure 4 shows the bar diagram of the accuracy and performance for each course. | 2307.00112#17 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 18 | is differentiable, allowing for multi-task learning by combining alignment and SFT objectives through single-stage training. On the other hand, RL, due to the discrete optimization problem, requires training the SFT model first and then constraining the RL model from deviating excessively from SFT. Consequently, RLHF necessitates two-stage training, which undoubtedly increases training costs.
Comparing Equation 1, Equation 7, and Equation 2, we can observe that PRO is more data-efficient. SFT can only leverage responses considered as desired in a preference ranking, completely disregarding negative responses. We believe negative examples are crucial in human alignment, since the LLM should not only learn what is good but also discern what is not. The critical component of RLHF, the reward model, is trained through pairwise response comparisons, requiring \binom{n}{2} comparisons for a ranking of length n. In contrast, PRO only needs n − 1 comparisons and introduces more negative examples in each comparison compared to RLHF. Therefore, PRO provides better and more stable score estimates, since more negative examples enlarge the response space, making the ranking process for obtaining the desired response more aligned with human expectations. | 2306.17492#18 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 18 | By noticing that ranking usually only cares about Top-K ranking metrics, where K is small, we can perform K passes. For N = 100 and K = 10, it still only requires 10% of the LLM API calls of PRP-Allpair. We call this approach PRP-Sliding-K.
PRP-Sliding-K has favorable time complexity but will be sensitive to input order, especially for small Ks. In experiments we show surprisingly good results with PRP-Sliding-10, without being sensitive to input ordering.
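An illustrative sketch of PRP-Sliding-K under the same comparator assumption: each pass is one backward bubble pass that compares adjacent documents and swaps them when the LLM prefers the lower-ranked one, and K passes are run to secure the top-K positions.

```python
def prp_sliding_k(compare, query, docs, k=10):
    """PRP-Sliding-K sketch: K bubble-style passes from the bottom of the list upwards.

    `compare(query, d_lower, d_upper)` returns +1 when the lower-ranked document should
    be promoted over the one currently above it (same contract as the earlier sketches).
    """
    ranking = list(docs)
    n = len(ranking)
    for _ in range(k):
        for pos in range(n - 1, 0, -1):  # one O(N) pass with stride 1
            upper, lower = ranking[pos - 1], ranking[pos]
            if compare(query, lower, upper) == 1:
                ranking[pos - 1], ranking[pos] = lower, upper  # swap the pair
    return ranking
```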
3.5 REMARKS
We focus on open-sourced LLMs that are easily accessible to academic researchers, and do not require inquiry of commercial LLM APIs, alleviating some monetary constraints. Also, the LLMs do not need to be finetuned in the zero-shot setting. However, we do acknowledge the cost of prompting LLMs in general.
Here we briefly summarize the properties of pointwise, pairwise, and listwise ranking promptings in Table 1, showing pairwise ranking prompting has several favorable properties.
Table 1: Comparison of pointwise, listwise, and pairwise approaches. N is the number of documents to be ranked for each query. O(N ) for Listwise approach is based on sliding window since other options are not practical. | 2306.17563#18 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 18 | [Figure 4 bar chart: counts of correct, wrong, and unanswered questions per subject (Anatomy, Biochemistry, Biostatistics, Ethics, Immunology, Microbiology, Pathology, Pharmacology, Physiology, Systems), with subject-level accuracies ranging from 20% to 80%; legend: Correct Answers, Wrong Answers, Not Answered, Accuracy.]
Figure 4 : Performance test based on the type of subject.
To identify any patterns or trends in ChatGPT's performance across the different courses, we conducted a statistical analysis of the data presented in the table. First, the mean accuracy rate across all subjects was calculated to be 59.5%, indicating that on average ChatGPT accurately answered slightly more than half of the questions posed.
Post-hoc tests were performed after an initial statistical analysis, such as ANOVA, to determine which pairs of groups differ significantly from each other. In this case, the accuracy rates across subjects were analyzed using a one-way ANOVA. This analysis indicates a significant difference in accuracy rates across subjects (F(9, 1) = 9.10, p < 0.05). The diagonal entries are all equal to "-" because they represent the comparison of a subject with itself which is not meaningful. | 2307.00112#18 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 19 | We also observe that PRO establishes a bridge between human alignment and contrastive learning. Contrastive learning has recently shown significant advancements in the field of self-supervised learning (Oord et al., 2018), where the main objective is to maximize the similarity between a query and its corresponding positive instance, while creating a distance from other negatives (Sohn, 2016). In the context of PRO, we model similarity as the parameters of the language model to measure the likelihood of generating a response. We expect that this modeling approach will encourage future researchers to fully explore the extensive research achievements in contrastive learning, ultimately achieving better human alignment.
# 3.2 Grafting RLHF onto PRO
While PRO can optimize directly on the human-annotated preference ranking sequence without the need for introducing concepts like the reward model in RLHF, we have found that grafting RLHF onto PRO can bring more flexibility to PRO. We outline three possible upgrades as follows: Affordable Preference Ranking.
PRO is highly flexible, relying solely on an arbitrarily long | 2306.17492#19 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 19 | Method      # of LLM API Calls          Generation API   Scoring API   Require Calibration
Pointwise   O(N)                        No               Yes           Yes
Listwise    O(N)                        Yes              No            No
Pairwise    O(N^2), O(N log N), O(N)    Yes              Yes           No
# 4 EXPERIMENTS
4.1 DATASETS AND METRICS
TREC is a widely used benchmark dataset in information retrieval research. We use the test sets of the 2019 and 2020 competitions: TREC-DL2019 and TREC-DL2020, which provide dense human relevance annotations for each of their 43 and 54 queries. Both use the MS MARCO v1 passage corpus, which contains 8.8 million passages. All comparisons are based on the reranking of the top 100 passages retrieved by BM25 (Lin et al., 2021) for each query. This is the same setting as existing work (Sun et al., 2023; Ma et al., 2023).
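For reference, ranking quality on these benchmarks is typically reported as NDCG at a small cutoff. The following is a self-contained sketch of NDCG@k with the common exponential gain 2^rel - 1; it is illustrative, and evaluation toolkits may differ slightly in the exact gain and discount conventions.

```python
import math

def dcg_at_k(gains, k):
    """Discounted cumulative gain with exponential gain 2^rel - 1 and log2 discount."""
    return sum((2 ** g - 1) / math.log2(rank + 2) for rank, g in enumerate(gains[:k]))

def ndcg_at_k(ranked_labels, k=10):
    """NDCG@k for one query, given graded relevance labels in ranked order."""
    ideal_dcg = dcg_at_k(sorted(ranked_labels, reverse=True), k)
    return dcg_at_k(ranked_labels, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Example: relevance labels of the top-5 reranked passages for one query.
print(ndcg_at_k([3, 2, 3, 0, 1], k=5))
```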
4.2 METHODS | 2306.17563#19 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2306.17492 | 20 | PRO is highly flexible, relying solely on an arbitrarily long
ranked preference sequence. The source of the sequence is unrestricted, allowing for various possibilities. One approach involves requesting annotators to imagine multiple responses of different quality. Alternatively, a more efficient method entails utilizing different existing LLMs, such as ChatGPT and Alpaca, to generate multiple responses. These responses can then be ranked using an additional reward model r_φ, similar to RLHF.
Differentiated Contrast. The formulation of L_PRO, as shown in Equation 8, treats all responses y^i ≺ y^k as negative examples of y^k and applies the same penalty to them. However, this approach may not be reasonable, especially when the preference scores of different y^i are similar. For instance, when the preference of y^{k+1} is only slightly worse than y^k, while y^n is significantly worse than y^k, the model should differentiate and apply different penalty strengths, slightly penalizing y^{k+1} and heavily penalizing y^n compared to y^k. To address this, we propose using the score r_φ(x, y^i) from a reward model r_φ to indicate the numerical preference of y^i, and modify Equation 8 as follows: | 2306.17492#20 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 20 | 4.2 METHODS
We evaluate PRP variants based on open-sourced LLMs, including FLAN-T5-XL, FLAN-T5-XXL (Chung et al., 2022), and FLAN-UL2 (Tay et al., 2022a), which have significantly smaller model sizes (3B, 11B, 20B) than alternatives, and are accessible even to academic researchers. We report PRP variants including PRP-Allpair, PRP-Sorting, and PRP-Sliding-K.
We consider the following supervised baselines, all trained on the MS MARCO dataset:
⢠monoBERT (Nogueira & Cho, 2019): A cross-encoder re-ranker based on BERT-large.
⢠monoT5 (Nogueira et al., 2020): A sequence-to-sequence re-ranker that uses T5 to calcu- late the relevance score with pointwise ranking loss.
⢠RankT5 (Zhuang et al., 2023): A re-ranker that uses T5 and listwise ranking loss.
We also consider the following zero-shot LLM-based baselines:
⢠Unsupervied Passage Re-ranker (UPR) (Sachan et al., 2022): The pointwise approach based on query generation. | 2306.17563#20 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 20 | Post hoc pairwise comparisons of subject accuracy rates (p-values; '-' marks a subject compared with itself):
Biostat      -      0.119  0.001  0.660  0.119  0.847  0.001  0.311  0.716
Pathology    0.119  -      0.003  0.052  0.735  0.233  0.001  0.039  0.512
Microbio     0.001  0.003  -      0.019  0.003  0.843  0.001  0.001  0.141
Systems      0.660  0.052  0.019  -      0.052  0.827  0.019  0.535  0.981
Biochem      0.119  0.735  0.003  0.052  -      0.735  0.001  0.039  0.512
Ethics       0.847  0.233  0.843  0.827  0.735  -      0.843  0.843  0.993
Anatomy      0.001  0.001  0.001  0.019  0.001  0.843  -      0.001  0.141
Immunology   0.311  0.039  0.001  0.535  0.039  0.843  0.001  -      0.648
             0.512  0.141  0.981  0.512  0.993  0.141  0.648  -
Physiology   0.973  0.254  0.035  0.525  0.254  0.997  0.035  0.175  0.827  0.973 | 2307.00112#20 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 21 | $$\mathcal{L}_{\text{PRO}} = -\sum_{k=1}^{n-1} \log \frac{\exp\left(\frac{r_{\pi_{\text{PRO}}}(x, y^k)}{\mathcal{T}^k_k}\right)}{\sum_{i=k}^{n} \exp\left(\frac{r_{\pi_{\text{PRO}}}(x, y^i)}{\mathcal{T}^i_k}\right)} \tag{9}$$
where
$$\mathcal{T}^{i>k}_{k} = \frac{1}{r_{\phi}(x, y^k) - r_{\phi}(x, y^i)} \tag{10}$$
$$\mathcal{T}^{k}_{k} = \min_{i>k} \mathcal{T}^{i}_{k} \tag{11}$$
When the difference between $r_{\phi}(x, y^k)$ and $r_{\phi}(x, y^i)$ increases, the preference gap between $y^k$ and $y^i$ becomes more evident. Consequently, the temperature $\mathcal{T}^i_k$ decreases, amplifying the penalty of positive example $y^k$ towards $y^i$, while it increases when the difference is smaller. $\mathcal{T}^k_k$ is defined as the minimum temperature among all the negative examples to maintain a balance between the numerator and denominator. Our experiments (§4.7) reveal that the dynamic temperature design significantly enhances model performance when optimizing $\mathcal{L}_{\text{PRO}}$ alone while excluding $\mathcal{L}_{\text{SFT}}$. It also provides some performance gains when jointly optimizing $\mathcal{L}_{\text{PRO}}$ and $\mathcal{L}_{\text{SFT}}$.
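To make the ranking objective above concrete, the following is a minimal NumPy sketch of the loss with the dynamic temperatures of Equations (10) and (11). The function name, the small epsilon guard, and the toy scores are illustrative assumptions rather than the authors' implementation; it assumes the candidates are ordered best-first and that the reward gaps are positive.

import numpy as np

def pro_loss(model_scores, reward_scores, eps=1e-8):
    """Sketch of the PRO ranking loss with dynamic temperatures (Eqs. 9-11)."""
    model_scores = np.asarray(model_scores, dtype=float)    # r_pi(x, y^k), candidates ordered best-first
    reward_scores = np.asarray(reward_scores, dtype=float)  # r_phi(x, y^k), same ordering
    n = len(model_scores)
    loss = 0.0
    for k in range(n - 1):
        gaps = reward_scores[k] - reward_scores[k + 1:]      # preference gaps to each negative y^i, i > k
        temps_neg = 1.0 / np.maximum(gaps, eps)              # Eq. (10): temperatures of the negatives
        temp_pos = temps_neg.min()                           # Eq. (11): the positive uses the smallest temperature
        logits = np.concatenate(([model_scores[k] / temp_pos],
                                 model_scores[k + 1:] / temps_neg))
        # Eq. (9): log-softmax term that pushes y^k above every remaining y^i.
        loss += -(logits[0] - np.log(np.exp(logits).sum()))
    return loss

# Toy usage: three candidates already sorted by human preference.
print(pro_loss(model_scores=[0.2, -0.1, -0.5], reward_scores=[1.5, 0.7, -0.3]))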
Self-bootstrapping Augmentation. Furthermore, it is worth noting that the length of sequences that PRO relies on is variable. In other words, there is no requirement for fixed sequences | 2306.17492#21 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 21 | • Unsupervised Passage Re-ranker (UPR) (Sachan et al., 2022): The pointwise approach based on query generation.
• Relevance Generation (RG) (Liang et al., 2022): The pointwise approach based on relevance generation.
• RankGPT (Sun et al., 2023): The listwise prompting based approach using various GPT based LLMs.
• Listwise Reranker with a Large language model (LRL) (Ma et al., 2023): A similar approach to RankGPT with slightly different prompt design.
4.3 MAIN RESULT
The main result is shown in Table 2. Overall, we are able to achieve very encouraging results using PRP. We have the following observations:
• PRP variants based on FLAN-UL2 with 20B parameters achieve the best results on all metrics on TREC-DL2020, and are only second to the blackbox, commercial gpt-4 based solution, which has an estimated 50X model size, on NDCG@5 and NDCG@10 on TREC-DL2019. Our best methods outperform RankGPT based on text-davinci-003 with 175B parameters by over 10% on all ranking metrics, and outperform supervised methods on almost all ranking metrics.
| 2306.17563#21 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2306.17492 | 22 | during training. This allows us to consider grafting the self-bootstrapping advantage of RLHF as a subset onto PRO. Specifically, RLHF aims to continuously evaluate the model's responses during training by employing a reward model. Positive or negative rewards are provided to bootstrap the Language Model itself. Similarly, with PRO, given the prompt $x$ and the current model, we sample a response $\hat{y}$ and add it to the existing response set $\{y^1, \cdots, y^n\}$. Subsequently, we re-rank the responses using the reward model, yielding $p(\hat{y}^{1,\cdots,n+1} \mid x)$. Therefore, further optimization can be performed by refreshing Equation 7:
$$\mathcal{L}_{\text{PRO}}(y^{1,\cdots,n} \mid x) \Rightarrow \mathcal{L}_{\text{PRO}}(\hat{y}^{1,\cdots,n+1} \mid x)$$
The abstract training procedures are as follows:
# Algorithm 1: Self-bootstrap PRO | 2306.17492#22 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 22 |
Table 2: Results on TREC-DL2019 and TREC-DL2020 datasets by reranking top 100 documents retrieved by BM25. Best model is in boldface and second best is underlined for each metric. All zero-shot LLM methods use BM25 to resolve prediction conflicts or failures. *OpenAI has not publicly released the model parameters and the numbers are based on public estimates (VanBuskirk, 2023; Baktash & Dawodi, 2023) | 2306.17563#22 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 22 | # Pharmacology 0.716
This means that the difference between the mean accuracy rate of "Biostat" and "Pathology" is not statistically significant at the 0.05 level, since the p-value (0.119) is greater than 0.05. Similarly, the post hoc p-value between Microbiology and Pharmacology is 0.141, demonstrating that it is not statistically significant. This implies that the difference between the mean accuracy rate of "Microbio" and "Pharmacology" is
not statistically significant at the 0.05 level, since the p-value (0.141) is greater than 0.05. On the other hand, the entry in the row labeled "Microbio" and the column labeled "Physiology" is "0.035", which means that the difference between the mean accuracy rate of "Microbio" and "Physiology" is statistically significant at the 0.05 level, since the p-value (0.035) is less than 0.05.
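As a small illustration of how such a post hoc matrix is read, the snippet below checks a few of the pairs discussed above against the 0.05 level; the dictionary and the helper loop are assumptions made only for illustration, with the p-values copied from the text.

ALPHA = 0.05  # significance level used throughout the discussion

p_values = {
    ("Biostat", "Pathology"): 0.119,
    ("Microbio", "Pharmacology"): 0.141,
    ("Microbio", "Physiology"): 0.035,
}

for (a, b), p in p_values.items():
    verdict = "significant" if p < ALPHA else "not significant"
    print(f"{a} vs {b}: p = {p:.3f} -> {verdict} at the {ALPHA} level")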
Table 3: Performance for the short questions.
SN Physician I Physician II Physician III Not Answers Grade 100% 100% 100% CGPA A A A | 2307.00112#22 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 23 | The abstract training procedures are as follows:
# Algorithm 1: Self-bootstrap PRO
Input: Language Model pi_LM, Reward Model r_phi, Raw Dataset D
Output: The fine-tuned LM pi_PRO
1  Split D into {D_0, D_1, ..., D_{k-1}}
2  for D_i in {D_0, D_1, ..., D_{k-1}} do
3      for Sample d in D_i do
4          x <- Prefix(d)
5          {y} <- Candidates(d)
6          y_hat <- pi_LM(x)  // Sampling from LM
7          Add y_hat to {y}
8          Score and re-rank {y} with x and r_phi
9      end for
10     pi_LM <- PRO(pi_LM, D_i)  // Train LLM
11 end for
12 pi_PRO <- pi_LM
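The Python sketch below mirrors the control flow of Algorithm 1. The callables sample_from_lm, reward_fn, and pro_update, the dictionary fields, and the way the raw dataset is split are stand-in assumptions for illustration, not the authors' code.

from typing import Callable, List, Sequence

def self_bootstrap_pro(
    dataset: Sequence[dict],
    num_splits: int,
    sample_from_lm: Callable[[str], str],      # y_hat <- pi_LM(x)
    reward_fn: Callable[[str, str], float],    # r_phi(x, y)
    pro_update: Callable[[List[dict]], None],  # one PRO training pass over a split
) -> None:
    """Sketch of self-bootstrapping: sample a response, re-rank, then train on the split."""
    splits = [list(dataset[i::num_splits]) for i in range(num_splits)]
    for split in splits:
        for sample in split:
            x = sample["prefix"]
            candidates = list(sample["candidates"])
            candidates.append(sample_from_lm(x))                           # add the model's own response
            candidates.sort(key=lambda y: reward_fn(x, y), reverse=True)   # score and re-rank with the reward model
            sample["candidates"] = candidates
        pro_update(split)                                                  # train the LM with PRO on this split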
# 4 Experiments
# 4.1 Datasets | 2306.17492#23 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 23 | Method LLM Size TREC-DL2019 TREC-DL2020 NDCG@1 NDCG@5 NDCG@10 NDCG@1 NDCG@5 NDCG@10 BM25 NA NA 54.26 52.78 50.58 57.72 50.67 47.96 Supervised Methods 70.74 69.40 72.32 72.99 monoBERT monoT5 monoT5 RankT5 78.70 77.47 80.25 80.86 79.07 79.84 79.07 77.38 73.25 73.77 73.74 73.94 Zero-Shot LLM Methods BERT T5 T5 T5 340M 220M 3B 3B 70.50 71.48 71.83 71.22 67.28 66.99 68.89 69.49 - 48.36 58.76 66.76 74.11 62.05 66.40 61.50 66.85 72.22 69.44 69.00 74.16 71.28 70.76 74.73 72.52 75.35 - 50.78 69.77 82.17 82.56 62.79 67.05 53.10 70.93 74.03 77.52 75.58 72.09 74.42 64.73 73.64 74.42 78.29 65.80 49.76 61.50 65.80 75.59 62.00 64.48 | 2306.17563#23 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 23 | Table 3: Performance for the short questions.
SN Physician I Physician II Physician III Not Answers Grade 100% 100% 100% CGPA A A A
We wanted to check whether ChatGPT works well on short answer questions. We asked ChatGPT ten short answer questions that were asked in the Harvard University Gross Anatomy first-year final exam. We first asked ChatGPT those questions and extracted the answers into a Word file. Secondly, we asked three physicians to evaluate those answers. Based on their evaluation, we found that ChatGPT scored an A from all three physicians.
Table 4: Google VS ChatGPT
SN              Google  ChatGPT
Correct answer  12      15
Wrong answer    5       4
Not Answered    3       1
For Google, out of 20 total questions (12 correct, 5 wrong, 3 unanswered), the overall accuracy rate would be 60%. For ChatGPT, out of 20 total questions (15 correct, 4 wrong, 1 unanswered), the overall accuracy rate would be 80%. Some bias can occur which can impact the performance of both systems. Sample bias: the questions asked may not be representative of the full range of topics and difficulty levels that are covered by the subject matter. For example, if the questions were all on a particular subtopic within a larger subject area, then the performance of Google and ChatGPT on those questions may not be indicative of their overall performance on the course. | 2307.00112#23 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 24 | # 4 Experiments
# 4.1 Datasets
We conduct experiments mainly based on Human Preference Data about Helpfulness and Harmlessness, i.e., HH-RLHF described in Bai et al. (2022a). It has 4 sub-sets, namely Harmlessbase, Helpfulbase, Helpfulonline and Helpfulrejection, where each sample contains two different conversations rated by human annotators and is grouped into train/test splits. We refer to the code2 released by OpenAssistant and filter all data to ensure that the chosen and rejected conversations in the same sample have identical contexts but different responses. Details can be found in Table 1.
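A minimal sketch of the filtering step described above, keeping only pairs whose chosen and rejected conversations share the same context and differ in the final response; the field names and the notion of "context" used here are assumptions for illustration.

from typing import Dict, List, Tuple

def shared_context(conversation: List[str]) -> Tuple[str, ...]:
    # Everything except the final response is treated as the conversation context.
    return tuple(conversation[:-1])

def filter_pairs(samples: List[Dict[str, List[str]]]) -> List[Dict[str, List[str]]]:
    """Keep samples whose chosen/rejected conversations have identical contexts but different responses."""
    kept = []
    for s in samples:
        chosen, rejected = s["chosen"], s["rejected"]
        if shared_context(chosen) == shared_context(rejected) and chosen[-1] != rejected[-1]:
            kept.append(s)
    return kept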
We combine training data from 4 sub-sets to fine-tune models and evaluate them on each of the
2https://github.com/LAION-AI/Open-Assistant | 2306.17492#24 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 24 | 72.09 74.42 64.73 73.64 74.42 78.29 65.80 49.76 61.50 65.80 75.59 62.00 64.48 58.95 64.61 69.75 69.28 68.66 69.87 67.81 67.00 72.42 71.88 72.65 - 50.77 64.73 71.15 79.16 62.07 65.41 57.68 66.81 71.73 71.88 71.23 71.28 69.62 69.49 74.77 73.60 75.49 text-davinci-003 175B LRL RankGPT gpt-3 175B text-davinci-003 175B RankGPT RankGPT 154B* gpt-3.5-turbo gpt-4 1T* RankGPT FLAN-T5-XXL 11B UPR RG FLAN-T5-XXL 11B 20B FLAN-UL2 UPR RG 20B FLAN-UL2 3B FLAN-T5-XL PRP-Allpair 3B FLAN-T5-XL PRP-Sorting PRP-Sliding-10 FLAN-T5-XL 3B FLAN-T5-XXL 11B PRP-Allpair PRP-Sorting FLAN-T5-XXL 11B | 2306.17563#24 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2306.17492 | 25 | We combine training data from 4 sub-sets to fine-tune models and evaluate them on each of the
2https://github.com/LAION-AI/Open-Assistant
test sets, while we do validation with 280 samples randomly selected from all test data. Each sample from the raw dataset contains a chosen conversation and a rejected one, which constitutes a relatively short ranking. To further evaluate the performance of different models on longer human preference rankings, we enhance each sample with additional responses from Alpaca (Taori et al., 2023) and ChatGPT3, thereby expanding the range of ranked candidates. We refer to these augmented datasets as HH-RLHFLLM,i, where LLM represents the language models used (Alpaca, ChatGPT, etc.), and i denotes the length of the rankings. The unmodified dataset is referred to as HH-RLHFraw.
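The augmentation can be sketched as follows: a 2-way preference pair is expanded with extra LLM responses and the whole set is re-ordered by a reward model. The generator and reward-model callables are assumed interfaces used only for illustration; they do not correspond to a specific library.

from typing import Callable, List

def expand_ranking(
    prompt: str,
    chosen: str,
    rejected: str,
    generators: List[Callable[[str], str]],   # e.g. one callable per extra LLM (Alpaca, ChatGPT, ...)
    reward_fn: Callable[[str, str], float],   # proxy score used to order the candidates
) -> List[str]:
    """Expand a (chosen, rejected) pair into a longer preference ranking."""
    candidates = [chosen, rejected] + [generate(prompt) for generate in generators]
    # Rank all candidates from most to least preferred according to the reward model.
    return sorted(candidates, key=lambda y: reward_fn(prompt, y), reverse=True)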
# 4.2 Evaluation Metrics | 2306.17492#25 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2307.00112 | 25 | determine its performance characteristics, including the United States Medical Licensing Examination (USMLE) step one as well as an ethical questionnaire from Harvard University. We found two major themes emerging from the analysis: (1) the increasing accuracy of ChatGPT; and (2) the possibility of using this AI to assist e-learning. The USMLE pass threshold is not constant; it changes every year, but at a general level it is around 60%. In our study, ChatGPT obtained 58.8% on logical questions and 60% on ethical ones. This shows that ChatGPT is approaching the passing range for logical questions and has touched the threshold for ethical ones. The accuracy obtained in this study is quite high compared to the GPT LLM of Liévin et al. (2023), which achieved 46% accuracy with zero prompting and reached 50% with extensive prompt tuning. Also, when ChatGPT was compared on an MCQ-based exam, it achieved 53.8%, which is 5% below the accuracy achieved in this study. As for the possibility of using this AI to assist e-learning: considering the results of the study, it was found that the AI-generated answers were more context-oriented and a better role model for a deductive | 2307.00112#25 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 26 | # 4.2 Evaluation Metrics
We present the findings of our study using various evaluation methods: automatic, model-based, and human-based metrics. In our main experiment, we utilize BLEU (Papineni et al., 2002) to assess the text quality and the Reward model to measure the level of human preference gained. These metrics allow us to evaluate the performance of numerous models automatically. For the analysis experiment, we employ human evaluators to conduct pairwise comparisons among the top-performing models identified through automated evaluations. Human evaluation is the gold standard for assessing human preferences (Zhou et al., 2023). An annotator judge is presented with a question and two responses and tasked with determining the better option or declaring a tie. Furthermore, recent studies have shown that GPT-4 (OpenAI, 2023) effectively evaluates the responses of chat assistants and aligns with human preferences (Zheng et al., 2023; Wang et al., 2023). Consequently, we involve GPT-4 to select a model generation from the two options. To mitigate positional bias (Zheng et al., 2023; Wang et al., 2023), we evaluate each candidate in both positions during two separate runs, and the final score is computed as the average of the two runs.
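A small sketch of the two-run, position-swapped comparison described above; the judge callable stands in for the GPT-4 (or human) preference call, and its 1 / 0.5 / 0 scoring convention is an assumption for illustration.

from typing import Callable

def debiased_pairwise_score(
    question: str,
    response_a: str,
    response_b: str,
    judge: Callable[[str, str, str], float],  # 1.0 if the first shown response wins, 0.5 for a tie, 0.0 otherwise
) -> float:
    """Average the verdict over both candidate orderings to mitigate positional bias."""
    run_1 = judge(question, response_a, response_b)        # A shown in the first position
    run_2 = 1.0 - judge(question, response_b, response_a)  # B shown first; flip the score back to A's perspective
    return (run_1 + run_2) / 2.0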
# Implementation Detail | 2306.17492#26 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 26 | • Results on FLAN-T5-XL and FLAN-T5-XXL are also competitive, showing that PRP generalizes to smaller LLMs. They are generally comparable with the gpt-3.5-turbo based solution (10X - 50X in size) and perform better than the text-davinci-003 based solution.
• We in general see an upward trend when we increase the model size using our proposed methods, showing pairwise ranking prompting can indeed leverage LLMs' capabilities from their scaling sizes. We suspect the slight inconsistency from FLAN-T5-XL to FLAN-T5-XXL is due to their tuning procedures1.
• It is encouraging to see good results from efficient PRP variants, alleviating efficiency concerns of pairwise ranking approaches.
4.4 MORE RESULTS ON PRP-SLIDING-K
We show more results on PRP-Sliding-K variants to better understand the behaviors, including multiple backward passes and a forward pass variant2. The results are shown in Table 3 and Table 4 on TREC-DL2019 and TREC-DL2020, showing consistent behaviors.
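A minimal sketch of one sliding-window pass built on a pairwise comparison call; prefers_first stands in for the PRP prompt to the LLM, and a backward pass here starts from the bottom of the initial (BM25) ordering. Both the function names and the adjacent-swap formulation are illustrative assumptions.

from typing import Callable, List

def sliding_window_pass(
    query: str,
    docs: List[str],
    prefers_first: Callable[[str, str, str], bool],  # PRP call: is the first document more relevant?
    backward: bool = True,
) -> List[str]:
    """One pass of the sliding-window (bubble-sort style) re-ranking used by PRP-Sliding variants."""
    ranked = list(docs)
    indices = range(len(ranked) - 1, 0, -1) if backward else range(1, len(ranked))
    for i in indices:
        # Compare adjacent documents and swap when the lower-ranked one is preferred.
        if prefers_first(query, ranked[i], ranked[i - 1]):
            ranked[i], ranked[i - 1] = ranked[i - 1], ranked[i]
    return ranked

def prp_sliding_k(query: str, docs: List[str], prefers_first, k: int = 10) -> List[str]:
    """K backward passes promote the strongest candidates toward the top of the list."""
    for _ in range(k):
        docs = sliding_window_pass(query, docs, prefers_first, backward=True)
    return docs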
Table 3: Sliding window results on the TREC-DL2019 dataset. | 2306.17563#26 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 26 | way to assist e-learning. Considering the results of the study, it was found that the AI-generated answers were more context-oriented and a better role model for a deductive reasoning process compared to the Google search results. In approximately 90% of the outcomes generated by ChatGPT, there was at least one meaningful insight present in at least one of the responses, suggesting that ChatGPT could be a useful platform for enhancing the delivery of e-learning. In a study conducted by Choi et al. (2023), ChatGPT was found to achieve an average grade of a C+ across all four courses, a low but passing grade for each. It is clear from the above that, using ChatGPT, a student should be able to get a passing score on the exam. Anatomy was the subject with our lowest accuracy, at 20%, and Biostatistics was at 40%; the rest of the subjects were above 50%. We can also conclude that with ChatGPT, passing marks are possible. After all, every student's first goal should be to pass the examination in order to graduate. Overall, we found that the results of these studies suggest that despite the ability of ChatGPT to answer both logical and critical reasoning questions related to ethics, there still appears to be room | 2307.00112#26 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 27 | # Implementation Detail
In this work, we choose LLaMA-7B (Touvron et al., 2023) as the backbone model, which has become a widespread test field for LLM research (Taori et al., 2023). We fine-tune it with the PRO algorithm built on the Huggingface library (Wolf et al., 2020).
We calculate BLEU scores to compare inference
3https://chat.openai.com/
Sub-set           Split     # train  # test
Harmlessbase      Raw       42537    2312
Harmlessbase      Filtered  42536    2312
Helpfulbase       Raw       43835    2354
Helpfulbase       Filtered  43835    2354
Helpfulonline     Raw       22007    1137
Helpfulonline     Filtered  22002    1137
Helpfulrejection  Raw       52421    2749
Helpfulrejection  Filtered  52420    2749
Table 1: Statistics of HH-RLHFraw. | 2306.17492#27 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 27 | Table 3: Sliding window results on the TREC-DL2019 dataset.
Method       LLM           Strategy     NDCG@1  NDCG@5  NDCG@10
PRP-Sliding  FLAN-UL2-20B  1 Forward    63.95   57.31   54.10
PRP-Sliding  FLAN-UL2-20B  1 Backward   78.29   62.15   57.58
PRP-Sliding  FLAN-UL2-20B  2 Backward   78.29   67.01   61.52
PRP-Sliding  FLAN-UL2-20B  3 Backward   78.29   70.72   64.60
PRP-Sliding  FLAN-UL2-20B  10 Backward  78.29   75.49   72.65
The results are easy to interpret:
1https://twitter.com/hwchung27/status/1668729544701001729 2Backward pass indicates starting from the bottom result with the lowest BM25 score, and vice versa.
Table 4: Sliding window results on the TREC-DL2020 dataset. | 2306.17563#27 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 27 | we found the results of these studies suggest that despite the ability of ChatGPT to answer both logical and critical reasoning questions related to ethics, there still appears to be room for improvement in terms of its accuracy as well. There is a pressing need for further research to better understand how ChatGPT can be used to answer different types of questions and to identify strategies that can improve its performance in the future. | 2307.00112#27 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 28 | Table 1: Statistics of HH-RLHFraw.
results with human-selected responses in test sets. To capture human preferences, reward models are used as proxies. Additionally, we expand the training set by incorporating output results from existing LLMs, requiring the sorting of the expanded preference rankings. However, manual sorting is time-consuming and costly, especially considering the large number of instances in the training set. Therefore, we employ an additional reward model to score and rearrange all candidate rankings during the pre-processing stage of training. To avoid unfairness in evaluation, we select two different reward models for training and evaluation, which we denote as RMtrain and RMeval, respectively. Reward values from RMeval are normalized with the Sigmoid function in case RMeval provides extreme values that excessively influence the overall performance. | 2306.17492#28 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2307.00112 | 28 | # Conclusion
In conclusion, this study aimed to assess the reliability of ChatGPT in answering complex medical and clinical questions. The results indicate that ChatGPT shows potential as a valuable tool for e-learners, but there is still room for improvement in terms of its accuracy. The analysis revealed that ChatGPT-generated answers were more context-oriented and demonstrated better deductive reasoning abilities compared to regular Google search results. However, the study found that ChatGPT needs to enhance its performance in answering logical questions to become a useful analytical tool. The study also highlighted the importance of further research to explore how ChatGPT can effectively answer different question types and to identify strategies for improving its performance. Overall, ChatGPT's performance in this study suggests its potential to assist in e-learning, but ongoing advancements are needed to optimize its accuracy and broaden its applications in the field of medicine and clinical practice.
# References:
Borji, A. (2023). A Categorical Archive of ChatGPT Failures.
Choi, J. H., Hickman, K. E., Monahan, A., & Schwarcz, D. B. (2023). ChatGPT Goes to Law School. Minnesota Legal Studies Research Paper No. 23-03. https://doi.org/10.2139/ssrn.4335905 | 2307.00112#28 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 29 | Moreover, we assign β, the weight of the SFT loss, to 0.05 · (l − 1)^2, where l is the ranking length. The sequence length, epoch, and learning rate are set to 512, 2, and 5e-6, respectively, while the maximum number of new tokens generated during inference is 128. We deploy our complete framework using 8 devices, 7 of which are dedicated to the model training, while the remaining one houses RMtrain for validation and potential self-bootstrapping augmentation (we consider this augmentation strategy as an analytical experiment and, unless otherwise specified, augmentation will not be enabled). With a batch size of 2 per device, we leverage a gradient accumulation step of 8, resulting in a total batch size of 112. More particulars can be found in our code.
4: https://huggingface.co/OpenAssistant/oasst-rm-2.1-pythia-1.4b-epoch-2.5
5: https://huggingface.co/OpenAssistant/oasst-rm-2-pythia-6.9b-epoch-1
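For concreteness, a minimal sketch of the training-configuration arithmetic described above; the β formula is reconstructed from the garbled expression as 0.05 · (l − 1)^2, and the snippet is illustrative rather than the authors' code:

```python
def sft_loss_weight(ranking_length: int) -> float:
    # SFT-loss weight beta as a function of the preference-ranking length l
    # (reconstructed as 0.05 * (l - 1)^2 from the text above).
    return 0.05 * (ranking_length - 1) ** 2

# Effective batch size: 7 training devices x 2 sequences per device x 8 accumulation steps = 112.
training_devices = 7
per_device_batch = 2
grad_accumulation_steps = 8
effective_batch_size = training_devices * per_device_batch * grad_accumulation_steps

if __name__ == "__main__":
    for l in (2, 3, 4):
        print(f"ranking length {l}: beta = {sft_loss_weight(l):.2f}")
    print(f"effective batch size = {effective_batch_size}")
```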
# 4.4 Baselines | 2306.17492#29 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 29 | • The behavior is similar to BubbleSort: strong NDCG@1 can already be achieved with one backward pass. As we conduct more passes, other Top-K ranking metrics get better (a minimal sketch of one such backward pass follows this list).
• Forward pass does not work well, which is intuitive, since it mainly performs demotion and is much less efficient in bringing good results to the top.
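A minimal sketch of one backward pass, assuming a hypothetical prp_prefers_first(query, doc_a, doc_b) helper that wraps the pairwise prompt and returns True when the LLM prefers the first passage; the helper and its prompt are illustrative, not the paper's exact implementation:

```python
from typing import Callable, List

def prp_sliding_backward_pass(query: str,
                              docs: List[str],
                              prp_prefers_first: Callable[[str, str, str], bool]) -> List[str]:
    # One BubbleSort-style backward pass of Pairwise Ranking Prompting.
    # Starting from the bottom of the current ranking (e.g. the BM25 order),
    # adjacent documents are compared with the pairwise prompt and swapped when
    # the lower-ranked one wins, so a relevant document can bubble up to the top
    # within a single pass.
    ranked = list(docs)
    for i in range(len(ranked) - 1, 0, -1):
        upper, lower = ranked[i - 1], ranked[i]
        if prp_prefers_first(query, lower, upper):  # the lower-ranked doc wins the comparison
            ranked[i - 1], ranked[i] = lower, upper
    return ranked

def prp_sliding_k(query: str, docs: List[str], prp_prefers_first, k: int = 10) -> List[str]:
    # Repeating the backward pass K times (PRP-Sliding-K) keeps improving the other Top-K metrics.
    ranked = list(docs)
    for _ in range(k):
        ranked = prp_sliding_backward_pass(query, ranked, prp_prefers_first)
    return ranked
```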
# 4.5 ROBUSTNESS TO INPUT ORDERING
One issue of listwise ranking prompting approaches is their sensitivity to input ordering. This is because the ranking will fall back to the initial order when LLM prediction fails, which is very common for the difficult listwise methods. In Table 5 we show results of different methods by inverting the initial order from BM25.
# Table 5: Input order sensitivity results on the TREC-DL2019 dataset. | 2306.17563#29 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 29 | Gilson, A., Safranek, C., Huang, T., Socrates, V., Chi, L., Taylor, R. A., & Chartash, D. (2022). How Does ChatGPT Perform on the Medical Licensing Exams? The Implications of Large Language Models for Medical Education. medRxiv, 2022.12.23.22283901. https://doi.org/10.1101/2022.12.23.22283901
Gratas, B. (2023, 3/3). 50 ChatGPT Statistics and Facts You Need to Know.
https://blog.invgate.com/chatgpt-statistics | 2307.00112#29 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 30 | We compare PRO with zero-shot baselines, and models fine-tuned on LLaMA-7B (Touvron et al., 2023) which share the same backbone with PRO: LLaMA (Touvron et al., 2023) is a collection of prevalent foundation models released to enhance research on LLM techniques of training, inference, and widespread applications. We evaluate the 7B version of LLaMA (LLaMA-7B) to be consistent with other fine-tuned baselines. Curie (Brown et al., 2020a) is considered as the 6.7B version of GPT-3, which has a similar size to LLaMA-7B. The model name used in API calls is text-curie-001. Alpaca (Taori et al., 2023) is an instruction-tuned version of LLaMA based on 52K instruction-following data. It is estimated to have a similar instruction-following competence with text-davinci-003 on the Self-Instruct evaluation suite (Wang et al., 2022). ChatGLM (Du et al., 2022) is a bilingual chatbot with 6.2B parameters. Having been implemented on GLM architecture (Du et al., | 2306.17492#30 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 30 | # Table 5: Input order sensitivity results on the TREC-DL2019 dataset.
Method          LLM            Init Order    NDCG@1  NDCG@5  NDCG@10
RankGPT         gpt-3.5-turbo  BM25          82.17   71.15   65.80
RankGPT         gpt-3.5-turbo  Inverse BM25  36.43   31.79   32.77
PRP-Allpair     FLAN-UL2-20B   BM25          73.64   74.77   72.42
PRP-Allpair     FLAN-UL2-20B   Inverse BM25  74.42   74.48   72.40
PRP-Sliding-1   FLAN-UL2-20B   BM25          78.29   62.15   57.58
PRP-Sliding-1   FLAN-UL2-20B   Inverse BM25  71.32   32.72   26.04
PRP-Sliding-10  FLAN-UL2-20B   BM25          78.29   75.49   72.65
PRP-Sliding-10  FLAN-UL2-20B   Inverse BM25  71.32   67.91   64.84
As expected, PRP-Allpair is quite robust to initial ordering, and PRP-Sliding-1 will suffer for metrics other than NDCG@1. PRP-Sliding-10 is quite robust since it focuses on Top-K ranking metrics. | 2306.17563#30 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 30 | Kasneci, E., Seßler, K., Kuechemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., & Kasneci, G. (2023). ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274
Khanna, A., Pandey, B., Vashishta, K., Kalia, K., Bhale, P., & Das, T. (2015). A Study of Today's A.I. through Chatbots and Rediscovery of Machine Intelligence. International Journal of u- and e-Service, Science and Technology, 8(7), 277-284. https://doi.org/10.14257/ijunesst.2015.8.7.28 | 2307.00112#30 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 31 | (Du et al., 2022) is a bilingual chatbot with 6.2B parameters. Having been implemented on GLM architecture (Du et al., 2022) and trained with SFT and RLHF on a large-scale conversation dataset, it manifests great potential of being in line with human preference. We implement it with its official code. ChatGPT is an online chat platform developed by OpenAI, which possesses great human-like abilities and allows versatile uses completed in the conversation form, after RLHF fine-tuning. SFT is the basic method that naively selects the top 1 candidate to fine-tune language models. Note that if we choose the best response in a preference ranking sequence sorted by a reward model, known as best-of-n sampling, SFT evolves into BoN. RLHF is successively promoted by Ziegler et al. (2019) and Ouyang et al. (2022a) to align the core of language models with human preference in Reinforcement Learning settings. We implement SFT and RLHF according to trlx. CoH (Liu et al., 2023) enforces language models to differentiate the most preferred candidate from the least preferred with | 2306.17492#31 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 31 | # 4.6 COMPARISON OF SCORING MODE AND GENERATION MODE
Our results above are all based on the scoring mode, since PRP only needs to get scores for two candidate outputs ("Passage A" and "Passage B") and it is easy to get probabilities from open-sourced LLMs. Here we compare PRP performance using scoring vs generation mode in Table 6, which will shed light on how PRP works with generation-only LLM APIs.
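A rough sketch of the two querying modes for a single pairwise comparison; the prompt template is simplified, and the llm_score(prompt, target) / llm_generate(prompt) helpers are assumed wrappers around a scoring or generation API, not a specific library's interface:

```python
def prp_pairwise_prompt(query: str, passage_a: str, passage_b: str) -> str:
    # Simplified pairwise ranking prompt asking the model to pick the more relevant passage.
    return (f'Given a query "{query}", which of the following two passages is more relevant?\n\n'
            f"Passage A: {passage_a}\n\nPassage B: {passage_b}\n\n"
            f"Output Passage A or Passage B:")

def compare_scoring_mode(query, passage_a, passage_b, llm_score):
    # Scoring mode: read out the log-likelihood of each target answer and take the larger one.
    prompt = prp_pairwise_prompt(query, passage_a, passage_b)
    return "A" if llm_score(prompt, "Passage A") >= llm_score(prompt, "Passage B") else "B"

def compare_generation_mode(query, passage_a, passage_b, llm_generate):
    # Generation mode: decode free text and parse the stated preference.
    prompt = prp_pairwise_prompt(query, passage_a, passage_b)
    output = llm_generate(prompt).strip()
    if output.startswith("Passage A"):
        return "A"
    if output.startswith("Passage B"):
        return "B"
    return None  # rare malformed output; the caller can fall back to the initial order
```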
Table 6: Results on TREC-DL2019 and TREC-DL2020 datasets using scoring vs generation mode for PRP. | 2306.17563#31 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 31 | Kung, T., Cheatham, M., Medinilla, A., Sillos, C., Leon, L., Elepano, C., Madriaga, M., Aggabao, R., Diaz-Candido, G., Maningo, J., & Tseng, V. (2022). Performance of ChatGPT on USMLE: Potential for AI-Assisted Medical Education Using Large Language Models. medRxiv. https://doi.org/10.1101/2022.12.19.22283643
Opara, E. (2023). CHATGPT FOR TEACHING, LEARNING AND RESEARCH: PROSPECTS AND CHALLENGES.
Susnjak, T. (2022). ChatGPT: The End of Online Exam Integrity? https://doi.org/10.48550/arXiv.2212.09292
Terwiesch, C. (2023). Would Chat GPT3 Get a Wharton MBA? A Prediction Based on Its Performance in the Operations Management Course. Mack Institute for Innovation Management, The Wharton School, University of Pennsylvania. | 2307.00112#31 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 32 | and RLHF according to trlx. CoH (Liu et al., 2023) enforces language models to differentiate the most preferred candidate from the least preferred with prompts, which actually aligns models with human preference from a semantic perspective. We implement it with the Huggingface Library (Wolf et al., 2020) according to its original version. RRHF (Yuan et al., 2023) takes candidate ranking into account, and distinguishes different candidates through pair-wise ranking losses. We implement it | 2306.17492#32 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 32 | Table 6: Results on TREC-DL2019 and TREC-DL2020 datasets using scoring vs generation mode for PRP.
                                      TREC-DL2019                TREC-DL2020
Method       LLM          Mode        NDCG@1  NDCG@5  NDCG@10    NDCG@1  NDCG@5  NDCG@10
PRP-Allpair  FLAN-T5-XL   Scoring     74.03   71.73   69.75      79.01   72.22   68.12
PRP-Allpair  FLAN-T5-XL   Generation  74.03   71.68   69.59      79.01   71.54   67.75
PRP-Allpair  FLAN-T5-XXL  Scoring     72.09   71.28   69.87      82.41   74.16   69.85
PRP-Allpair  FLAN-T5-XXL  Generation  72.09   71.61   69.94      80.56   73.69   69.53
PRP-Allpair  FLAN-UL2     Scoring     73.64   74.77   72.42      85.19   74.73   70.68
PRP-Allpair  FLAN-UL2     Generation  73.64   74.84   72.37      85.19   74.74   70.69 | 2306.17563#32 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 32 | Thurzo, A., Strunga, M., Urban, R., Surovkova, J., et al. (2023). Impact of Artificial Intelligence on Dental Education: A Review and Guide for Curriculum Update. Education Sciences, 13, 150. https://doi.org/10.3390/educsci13020150
Wallace, R. S. (2009). The Anatomy of A.L.I.C.E. In R. Epstein, G. Roberts, & G. Beber (Eds.), Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer (pp. 181-210). Springer. https://doi.org/10.1007/978-1-4020-6710-5_13
Weizenbaum, J. (1966). ELIZA - a computer program for the study of natural language communication between man and machine. Commun. ACM, 9(1), 36-45. https://doi.org/10.1145/365153.365168
Guo, B., Zhang, X., Wang, Z., Jiang, M., Nie, J., Ding, Y., Yue, J., & Wu, Y. (2023). How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection. ArXiv. | 2307.00112#32 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 33 | Training Set Method Harmlessbase Helpfulbase Helpfulonline Helpfulrejection Total BLEU Reward BLEU Reward BLEU Reward BLEU Reward BLEU Reward Zero-shot 10.82 LLaMA 14.23 Curie 15.07 Alpaca ChatGLM 15.39 15.51 ChatGPT 51.16 50.71 53.03 63.30 71.44 12.78 17.33 19.68 20.16 21.38 31.71 45.51 49.80 59.14 65.94 15.02 17.11 18.77 30.99 29.81 38.91 51.36 55.74 61.10 67.94 14.60 18.99 22.21 25.41 26.52 34.85 48.68 53.72 61.45 68.39 13.13 16.99 19.12 21.99 22.56 38.94 48.71 52.72 61.27 68.48 HH-RLHFraw SFT RLHF CoH RRHF PRO 15.07 14.54 13.34 13.49 12.05 55.96 55.05 45.47 53.98 62.96 20.40 19.86 23.17 18.76 20.83 41.36 42.16 39.03 48.23 48.51 29.36 28.04 33.84 30.68 28.75 54.08 53.40 52.63 | 2306.17492#33 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 33 | We can see that PRP is extremely robust to scoring vs generation API, even for smaller LLMs, showing its generality to different LLM systems. The results are intuitive: LLMs make few generation mistakes due to the simplicity of PRP. We found that only about 0.02% of predictions do not follow the desired format, which is negligible and in stark contrast to the listwise approaches.
# 5 LIMITATIONS AND DISCUSSIONS
Cost and Efficiency. We discussed different efficient variants of PRP. Also, our results are based on LLMs that are easily approachable for academic researchers (Taori et al., 2023), alleviating the need to call commercial APIs. However, further reducing the number of calls to LLMs is still an interesting research direction, such as leveraging active learning techniques.
Domain adaptation. The datasets used in this paper are for the standard and important relevance-based text ranking. How LLMs can be adapted to non-standard ranking datasets, such as counter arguments in the ArguAna dataset (Wachsmuth et al., 2018), needs more investigation. Our work can facilitate such explorations by providing approachable zero-shot baselines using open-source LLMs. | 2306.17563#33 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 33 | Abdul-Kader, S. A., & Woods, J. C. (2015). Survey on chatbot design techniques in speech conversation systems. International Journal of Advanced Computer Science and Applications, 6(7).
De Angeli, A., & Carpenter, R. (2005). Stupid computer! Abuse and social identities. In Proc. INTERACT 2005 workshop Abuse: The darker side of Human-Computer Interaction (No. 4, pp. 19-25).
Zhang, C., Zhang, C., Li, C., Qiao, Y., Zheng, S., Dam, S. K., ... & Hong, C. S. (2023). One small step for generative AI, one giant leap for AGI: A complete survey on ChatGPT in AIGC era. arXiv preprint arXiv:2304.06488.
Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, et al. 2020. Language Models Are Few-Shot Learners. arXiv preprint arXiv:2005.14165 (2020)
Zarifhonarvar, A. (2023). Economics of chatgpt: A labor market view on the occupational impact of artificial intelligence. Available at SSRN 4350925. | 2307.00112#33 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figues, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17492 | 34 | 41.36 42.16 39.03 48.23 48.51 29.36 28.04 33.84 30.68 28.75 54.08 53.40 52.63 56.44 59.02 25.54 25.11 29.79 24.95 27.17 47.08 47.73 46.57 52.51 53.28 21.80 21.19 24.06 20.91 21.54 48.83 48.93 45.00 52.25 55.35 HH-RLHFAlpaca,3 BoN RLHF CoH RRHF PRO 16.75 16.33 13.71 12.79 14.41 59.24 56.61 47.36 54.18 62.60 22.81 23.12 22.45 19.21 22.47 54.04 54.85 42.34 53.23 54.38 29.89 30.54 33.17 31.53 25.61 61.00 60.97 53.19 59.04 60.90 27.76 27.94 28.76 25.14 26.82 58.04 58.4 48.61 56.76 58.26 23.7 23.82 23.54 21.02 22.11 57.66 57.28 47.15 55.39 58.72 HH-RLHFChatGPT,3 BoN RLHF CoH RRHF PRO 15.05 13.63 | 2306.17492#34 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 34 | Other Models. We do not use GPT models (though we compare with them using results from other papers) in this work. Testing the performance of our methods on such models is meaningful benchmarking effort.
Ranking-aware LLMs. Like other existing work, we focus on zero-shot ranking with off-the-shelf LLMs, and show that pairwise ranking is the ideal prompting unit. How to make LLMs more ranking-aware, in a data-efficient manner, while maintaining their generality for other tasks, is a challenging research direction.
No data leakage. We want to note that there is no data leakage problem in the ranking task evaluations. We mainly use FLAN models (Wei et al., 2021), which never observe the question-passage supervision needed for ranking training. This is in contrast to, e.g., some Question Answering (QA) datasets where the ground-truth QA pairs might be used to instruction fine-tune the LLMs. Also, the labels in the datasets are dense human annotations for each question-answer pair. So our setting, which is the same as existing work, really measures LLMs' capability to do comparative relevance ranking.
# 6 RELATED WORK | 2306.17563#34 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2307.00112 | 34 | Zarifhonarvar, A. (2023). Economics of chatgpt: A labor market view on the occupational impact of artificial intelligence. Available at SSRN 4350925.
Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Available at SSRN 4337484.
Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., ... & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. | 2307.00112#34 | Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education | Artificial intelligence is gaining traction in more ways than ever before.
The popularity of language models and AI-based businesses has soared since
ChatGPT was made available to the general public via OpenAI. It is becoming
increasingly common for people to use ChatGPT both professionally and
personally. Considering the widespread use of ChatGPT and the reliance people
place on it, this study determined how reliable ChatGPT can be for answering
complex medical and clinical questions. Harvard University gross anatomy along
with the United States Medical Licensing Examination (USMLE) questionnaire were
used to accomplish the objective. The paper evaluated the obtained results
using a 2-way ANOVA and posthoc analysis. Both showed systematic covariation
between format and prompt. Furthermore, the physician adjudicators
independently rated the outcome's accuracy, concordance, and insight. As a
result of the analysis, ChatGPT-generated answers were found to be more
context-oriented and represented a better model for deductive reasoning than
regular Google search results. Furthermore, ChatGPT obtained 58.8% on logical
questions and 60% on ethical questions. This means that the ChatGPT is
approaching the passing range for logical questions and has crossed the
threshold for ethical questions. The paper believes ChatGPT and other language
learning models can be invaluable tools for e-learners; however, the study
suggests that there is still room to improve their accuracy. In order to
improve ChatGPT's performance in the future, further research is needed to
better understand how it can answer different types of questions. | http://arxiv.org/pdf/2307.00112 | Prabin Sharma, Kisan Thapa, Dikshya Thapa, Prastab Dhakal, Mala Deep Upadhaya, Santosh Adhikari, Salik Ram Khanal | cs.CY, cs.AI | 12 pages, 4 Figures, 4 tables | null | cs.CY | 20230630 | 20230727 | [
{
"id": "2005.14165"
},
{
"id": "2304.06488"
}
] |
2306.17563 | 35 | # 6 RELATED WORK
We did a detailed review and analysis of the most relevant existing efforts for ranking with LLMs, including pointwise and listwise approaches, in Section 2. These works and ours focus on the challenging zero-shot text ranking setting with LLMs, without providing any exemplars, conducting any fine-tuning, or training an additional model. Prior to the recent efforts related to ranking with LLMs, most works focus on the supervised learning to rank problem (Liu, 2009; Qin et al., 2021) by fine-tuning Pre-trained Language Models (PLMs) such as T5 (Nogueira et al., 2020; Zhuang et al., 2023; Hui et al., 2022) or BERT (Nogueira & Cho, 2019; Zhuang et al., 2021), which serve as very strong baselines. | 2306.17563#35 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2306.17492 | 36 | Table 2: Main Results. PRO consistently acquires more reward than all ï¬ne-tuned baselines, while is close to or even exceeding ChatGLM and ChatGPT.
with its official code.
# 4.5 Main Results
Table 2 contains the experimental results of the comparison between PRO and other baselines. To verify that PRO has a globally competitive ability to capture human preference from rankings with diverse lengths, we do experiments on HH-RLHFraw, HH-RLHFAlpaca,3 and HH-RLHFChatgpt,3, the last two of which are augmented from HH-RLHFraw by Alpaca and ChatGPT, respectively.
In general, it can be found that LLaMA with fine-tuning has a notable improvement on BLEU and Reward against the initial LLaMA, which has not undergone any specific alignment with human preference. Also, even without fine-tuning on HH-RLHF, models tuned on large-scale corpora still show certain performance, while ChatGLM and ChatGPT with RLHF training beat LLaMA, Curie, and Alpaca that are trained from scratch. All of these prove the significance of Human Alignment. | 2306.17492#36 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 36 | There has been a strong recent interest in exploring information retrieval in general with LLM-based approaches, due to the importance of the applications and the power of LLMs to understand textual queries and documents (Dai et al., 2022; Tay et al., 2022b; Wang et al., 2023; Jagerman et al., 2023; Bonifacio et al., 2022). Several works leverage the generation power of LLMs to generate training data to train an additional downstream retrieval or ranking model, typically in the few-shot setting (Dai et al., 2022), which is a very different setting from ours. Recent methods in this family, such as Inpars (Bonifacio et al., 2022), still significantly underperform fine-tuned baselines. ExaRanker (Ferraretto et al., 2023) uses LLMs to generate explanations for ranking decisions, and uses such explanations in ranking model fine-tuning, showing limited performance benefits. HyDE (Gao et al., 2022) uses LLMs to augment queries by generating hypothetical documents for unsupervised retrieval. These works do not directly | 2306.17563#36 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2306.17492 | 37 | Next, we compare different human alignment algorithms using the same backbone on the same dataset. Even in the most basic setting of HH-RLHFraw, where the ranking length is 2 with only one positive example and one negative example, PRO has already significantly outperformed all baselines in terms of reward score while maintaining considerable BLEU scores. Specifically, compared to SFT, PRO improves the reward score by 6.52 points, and compared to the state-of-the-art human alignment algorithm RRHF, it improves the score by 3.1 points. This demonstrates that even without expanding the ranking sequence, PRO remains the best-performing approach. CoH achieves higher BLEU scores but falls short of PRO in terms of reward, which remains mediocre. PRO exhibits a distinct advantage in terms of Harmlessness compared to Helpfulness. We attribute this to the fact that achieving Harmlessness is comparatively easier for PRO, as it primarily involves significant features such as adapting expression styles and maintaining politeness in most conversations. On the other hand, Helpfulness typically demands more specific suggestions, which pose a greater challenge for language models due to their limited world knowledge, thus increasing the difficulty in this aspect. | 2306.17492#37 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 37 | (Gao et al., 2022) uses LLMs to augment queries by generating hypothetical documents for unsupervised retrieval. These works do not directly explore the retrieval or ranking capability of LLMs, but mainly use LLMs as auxiliary tools to complement traditional paradigms, possibly limiting the benefits that LLMs can provide. New paradigms such as Differentiable Search Index (DSI) (Tay et al., 2022b; Wang et al., 2022) directly use Transformer memory to index documents for retrieval. Though novel, the performance gap from supervised baselines is still large. | 2306.17563#37 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2306.17492 | 38 | When expanding the ranking sequence using existing LLMs and sorting it with an additional reward model (different from the evaluation reward model), we find that the utilized LLM plays a crucial role in achieving concrete performances. Since ChatGPT surpasses Alpaca in understanding human preferences, it provides superior samples during data augmentation compared to Alpaca, making HH-RLHFChatgpt,i intuitively better than HH-RLHFAlpaca,3. The performance of each | 2306.17492#38 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
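The ranking-augmentation step described in the PRO chunk above (candidate responses from other LLMs sorted by an auxiliary reward model, with Best-of-N keeping only the top-scored one) could look roughly like the sketch below. This is a minimal illustration, not the paper's implementation; `reward_model` and `candidates` are hypothetical placeholders for whatever scorer and sampler are actually used.

```python
from typing import Callable, List, Tuple

def rank_candidates(prompt: str,
                    candidates: List[str],
                    reward_model: Callable[[str, str], float]) -> List[Tuple[str, float]]:
    """Sort candidate responses by an auxiliary reward model's score, best first."""
    scored = [(resp, reward_model(prompt, resp)) for resp in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

def best_of_n(prompt: str,
              candidates: List[str],
              reward_model: Callable[[str, str], float]) -> str:
    """Best-of-N (BoN): keep only the top-scored response, e.g. as an SFT target.
    Assumes at least one candidate is provided."""
    return rank_candidates(prompt, candidates, reward_model)[0][0]
```

The sorted list is what a listwise method can train on directly, while BoN discards everything below the top response.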
2306.17563 | 38 | Our work shares spirit with several key techniques for LLMs, such as reward modeling using pairwise preferences (Christiano et al., 2017).
# 7 CONCLUSION
In this paper, we propose to use pairwise prompting for ranking tasks. To the best of our knowledge, this is the first time in the literature showing that very competitive ranking performance can be achieved using moderate-sized, open-sourced LLMs. The key insight is the observation of the difficulties LLMs face when handling ranking tasks in the existing pointwise and listwise formulations. Our designed pairwise ranking prompting (PRP) is effective in reducing the burden on LLMs. We also discuss efficiency concerns and ways to mitigate them, as well as several good properties of PRP.
This version is a preprint. Besides the directions we mentioned in Section 5, we are actively working on proposing more effective prompts, more efficient ranking paradigms, and evaluating on more LLMs and datasets.
# REFERENCES
Monica Agrawal, Stefan Hegselmann, Hunter Lang, Yoon Kim, and David Sontag. Large language models are zero-shot clinical information extractors. arXiv preprint arXiv:2205.12689, 2022. | 2306.17563#38 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
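As a rough illustration of the pairwise prompting unit discussed in the PRP conclusion above, the sketch below compares one passage pair in both input orders and aggregates the two generations into a single preference. The prompt wording and the `llm_generate` callable are assumptions for illustration, not the paper's exact template or API.

```python
from typing import Callable

# Paraphrased pairwise prompt; the exact wording used in the paper may differ.
PAIRWISE_TEMPLATE = (
    "Given a query: {query}, which of the following two passages is more "
    "relevant to the query?\n\nPassage A: {doc_a}\n\nPassage B: {doc_b}\n\n"
    "Output Passage A or Passage B:"
)

def pairwise_preference(query: str, doc_a: str, doc_b: str,
                        llm_generate: Callable[[str], str]) -> float:
    """Compare one passage pair in both input orders and aggregate the outputs.
    Returns 1.0 if doc_a is consistently preferred, 0.0 if doc_b is, and 0.5
    for ties or order-inconsistent generations."""
    out_ab = llm_generate(PAIRWISE_TEMPLATE.format(query=query, doc_a=doc_a, doc_b=doc_b))
    out_ba = llm_generate(PAIRWISE_TEMPLATE.format(query=query, doc_a=doc_b, doc_b=doc_a))
    a_first = out_ab.strip().startswith("Passage A")   # doc_a shown as Passage A
    a_second = out_ba.strip().startswith("Passage B")  # doc_a shown as Passage B here
    if a_first and a_second:
        return 1.0
    if not a_first and not a_second:
        return 0.0
    return 0.5
```

Aggregating over both orders is one simple way to make the comparison insensitive to input ordering; the resulting pairwise preferences can then be fed to whatever sorting or aggregation scheme is used on top.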
2306.17492 | 39 | method increases from HH-RLHFChatgpt,3 to HH-RLHFAlpaca,3. On the expanded sequences, we observe that BoN (selecting the response with the highest reward model score for SFT) becomes a competitive baseline. This finding aligns with Rafailov et al. 2023, who observed that RLHF is less tuning-efficient than BoN. The effectiveness of RRHF becomes less prominent because it relies on pairwise comparisons between candidates from given rankings. It fails to capture global differences corresponding to human preference in the long rankings, which can be achieved through Equation 5. Overall, in the expanded ranking, PRO remains the best-performing method, and the more powerful the LLM used for ranking augmentation, the more pronounced the improvement of PRO's performance. This surprising characteristic fills us with anticipation for PRO's future development.
# 4.6 Human and GPT-4 Evaluation | 2306.17492#39 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
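The chunk above points to Equation 5 as what lets PRO capture global preference differences across a long ranking. A minimal PyTorch sketch of one such iterative contrast over response log-likelihoods is given below; it follows the high-level description in the abstract (prioritize the best response, then progressively rank the remainder) and is not necessarily the paper's exact formulation or weighting.

```python
import torch
import torch.nn.functional as F

def iterative_ranking_loss(seq_logprobs: torch.Tensor) -> torch.Tensor:
    """seq_logprobs: shape (n,), the policy's (length-normalized) log-likelihood
    of each candidate response, ordered from most to least preferred (n >= 2)."""
    n = seq_logprobs.shape[0]
    loss = seq_logprobs.new_zeros(())
    # Step k: treat candidate k as the positive and contrast it against all
    # lower-ranked candidates, then drop it and repeat on the remainder.
    for k in range(n - 1):
        loss = loss - F.log_softmax(seq_logprobs[k:], dim=0)[0]
    return loss / (n - 1)
```

With n = 2 this reduces to an ordinary pairwise (Bradley-Terry style) comparison; longer rankings simply add more contrast steps.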
2306.17563 | 39 | Jawid Ahmad Baktash and Mursal Dawodi. Gpt-4: A review on advancements and opportunities in natural language processing. arXiv preprint arXiv:2305.03195, 2023.
Inpars: Unsupervised dataset generation for information retrieval. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 2387–2392, 2022.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in neural information processing sys- tems, 30, 2017. | 2306.17563#39 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2306.17492 | 40 | # 4.6 Human and GPT-4 Evaluation
One could argue that the reward model fails to capture all human preferences in evaluation. Human annotation is considered the most accurate evaluation method, and recently, GPT-4-as-a-judge has emerged as a scalable approach for rapidly assessing human preference. Therefore, in this section, we provide comprehensive evaluations conducted by both GPT-4 and humans. To address cost concerns, we primarily compare the performance of PRO against two alternative counterparts: (1) PRO vs. Golden, i.e., the 1st candidate provided by the datasets, to determine whether PRO trained on HH-RLHFraw can achieve or surpass the human-preferred responses provided by the raw dataset; (2) PRO vs. RRHF, both of which are trained on HH-RLHFraw.
We aim to verify whether PRO is truly preferred over RRHF in terms of human preferences, even in ranking sequences of length 2 that do not fully exploit PRO's capabilities. On the other hand, this comparison serves as evidence to some extent for the validity of the reward model we use in evaluation. | 2306.17492#40 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 40 | Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-ï¬netuned language mod- els. arXiv preprint arXiv:2210.11416, 2022.
Zhuyun Dai, Vincent Y Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B Hall, and Ming-Wei Chang. Promptagator: Few-shot dense retrieval from 8 examples. arXiv preprint arXiv:2209.11755, 2022.
Fernando Ferraretto, Thiago Laitz, Roberto Lotufo, and Rodrigo Nogueira. Exaranker: Explanation-augmented neural ranker. arXiv preprint arXiv:2301.10521, 2023.
Luyu Gao, Xueguang Ma, Jimmy Lin, and Jamie Callan. Precise zero-shot dense retrieval without relevance labels. arXiv preprint arXiv:2212.10496, 2022. | 2306.17563#40 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2306.17492 | 41 | For GPT-4 evaluation, we first sample contexts in each test set. We assemble the two corresponding responses from PRO and its counterpart into a modified version of the prompt template from Zheng et al. (2023) for GPT-4 scoring. We also refer to Wang et al. (2023) to present the two candidates in both orders, respectively, to eliminate unfairness
Sub-set (PRO vs. Golden): Win / Tie / Lose
Harmless_base: 60.00 / 5.00 / 35.00
Helpful_base: 77.50 / 0.00 / 22.50
Helpful_online: 27.50 / 12.50 / 60.00
Helpful_rejection: 55.00 / 0.00 / 45.00
Average: 55.00 / 4.37 / 40.63
Sub-set (PRO vs. RRHF): Win / Tie / Lose
Harmless_base: 62.50 / 10.00 / 27.50
Helpful_base: 70.00 / 0.00 / 30.00
Helpful_online: 45.00 / 5.00 / 50.00
Helpful_rejection: 65.00 / 0.00 / 35.00
Average: 60.62 / 3.75 / 35.63
Table 3: Results of GPT-4 Evaluation. We allow GPT-4 to evaluate responses between PRO and golden samples, as well as responses between PRO and RRHF. | 2306.17492#41 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 41 | Yupeng Hou, Junjie Zhang, Zihan Lin, Hongyu Lu, Ruobing Xie, Julian McAuley, and Wayne Xin Zhao. Large language models are zero-shot rankers for recommender systems. arXiv preprint arXiv:2305.08845, 2023.
Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, pp. 9118–9147. PMLR, 2022.
Kai Hui, Honglei Zhuang, Tao Chen, Zhen Qin, Jing Lu, Dara Bahri, Ji Ma, Jai Gupta, Cicero dos Santos, Yi Tay, et al. Ed2lm: Encoder-decoder to language model for faster document re-ranking inference. In Findings of the Association for Computational Linguistics: ACL 2022, pp. 3747–3758, 2022.
Rolf Jagerman, Honglei Zhuang, Zhen Qin, Xuanhui Wang, and Michael Bendersky. Query expansion by prompting large language models. arXiv preprint arXiv:2305.03653, 2023. | 2306.17563#41 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2306.17492 | 42 | Table 3: Results of GPT-4 Evaluation. We allow GPT-4 to evaluate responses between PRO and golden samples, as well as responses between PRO and RRHF.
Sub-set (PRO vs. Golden): Win / Tie / Lose
Harmless_base: 20.00 / 55.00 / 25.00
Helpful_base: 20.00 / 60.00 / 20.00
Helpful_online: 20.00 / 50.00 / 30.00
Helpful_rejection: 30.00 / 60.00 / 10.00
Average: 22.50 / 56.25 / 21.25
Sub-set (PRO vs. RRHF): Win / Tie / Lose
Harmless_base: 45.00 / 40.00 / 15.00
Helpful_base: 35.00 / 45.00 / 20.00
Helpful_online: 45.00 / 30.00 / 25.00
Helpful_rejection: 35.00 / 35.00 / 30.00
Average: 40.00 / 37.50 / 22.50
Table 4: Results of Human Evaluation.
triggered by candidate order. For Human evaluation, we employ human labelers to evaluate the same samples as in the GPT-4 evaluation, and directly distinguish one response from another. | 2306.17492#42 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
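The evaluation protocol in the chunk above (a GPT-4 judge shown the two candidates in both orders to avoid position bias, with results tallied as Win/Tie/Lose as in Tables 3 and 4) could be wired up roughly as follows. This is a hedged sketch: `judge` stands in for the actual GPT-4 call and the prompt template adapted from Zheng et al. (2023), neither of which is reproduced here.

```python
from collections import Counter
from typing import Callable, List, Tuple

def order_debiased_verdict(context: str, resp_pro: str, resp_other: str,
                           judge: Callable[[str, str, str], str]) -> str:
    """Query the judge twice with the candidate order swapped; only a consistent
    preference counts as a win or a loss, anything else is recorded as a tie."""
    first = judge(context, resp_pro, resp_other)     # expected to return "A" or "B"
    second = judge(context, resp_other, resp_pro)    # PRO is candidate B in this call
    if first == "A" and second == "B":
        return "win"
    if first == "B" and second == "A":
        return "lose"
    return "tie"

def win_tie_lose(samples: List[Tuple[str, str, str]],
                 judge: Callable[[str, str, str], str]) -> dict:
    """Aggregate Win/Tie/Lose percentages over (context, PRO response, counterpart response) triples."""
    counts = Counter(order_debiased_verdict(c, p, o, judge) for c, p, o in samples)
    total = max(sum(counts.values()), 1)
    return {k: 100.0 * counts[k] / total for k in ("win", "tie", "lose")}
```

The same tallying applies to the human evaluation, with the human labeler's choice replacing the judge call.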
2306.17563 | 42 | Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. arXiv preprint arXiv:2205.11916, 2022.
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110, 2022.
Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. Pyserini: A Python toolkit for reproducible information retrieval research with sparse and dense representations. In Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021), pp. 2356–2362, 2021.
Tie-Yan Liu. Learning to rank for information retrieval. Foundation and Trends® in Information Retrieval, 3(3):225–331, 2009. | 2306.17563#42 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2306.17492 | 43 | triggered by candidate order. For Human evalua- tion, we employ human labelers to estimate the same samples with GPT-4 evaluation, and directly distinguish one response from another.
Tables 3 and 4 give the detailed results, where both GPT-4 and the human evaluators support PRO more overall in each comparison, thus highlighting the strengths of PRO. We are surprised to find that both humans and GPT-4 consider the predictions of PRO to be better than the human-preferred responses annotated in the dataset. This suggests that PRO is able to effectively capture the preferences of humans as reflected in the annotated data. Furthermore, our evaluation using the reward model yielded consistent results, with both humans and GPT-4 significantly favoring PRO over RRHF. This not only reaffirms the effectiveness of PRO but also demonstrates that our reward model can reasonably evaluate human preferences. | 2306.17492#43 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
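As an illustration of the evaluation protocol in the chunk above (pairwise judgments from GPT-4 or human labelers, with attention to candidate order), here is a minimal, hypothetical sketch: `judge` stands in for whichever judge is used (a GPT-4 prompt or a human annotation interface) and returns "A", "B", or "tie"; each pair is judged twice with the candidates swapped so neither response benefits from its position.

```python
from typing import Callable, List, Tuple

def order_balanced_win_rate(
    pairs: List[Tuple[str, str, str]],      # (prompt, response_a, response_b)
    judge: Callable[[str, str, str], str],  # returns "A", "B", or "tie"
) -> float:
    """Fraction of comparisons won by response_a, with order-swapped judging."""
    wins_a = ties = 0
    for prompt, a, b in pairs:
        first = judge(prompt, a, b)  # a shown first
        # b shown first; map the swapped verdict back to a/b before combining.
        second = {"A": "B", "B": "A", "tie": "tie"}[judge(prompt, b, a)]
        votes = (first, second)
        if votes.count("A") > votes.count("B"):
            wins_a += 1
        elif votes.count("A") == votes.count("B"):
            ties += 1
    return (wins_a + 0.5 * ties) / len(pairs)
```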
2306.17563 | 43 | Tie-Yan Liu. Learning to rank for information retrieval. Foundation and Trends® in Information
Retrieval, 3(3):225–331, 2009.
Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 8086–8098, 2022.
Xueguang Ma, Xinyu Zhang, Ronak Pradeep, and Jimmy Lin. Zero-shot listwise document reranking with a large language model. arXiv preprint arXiv:2305.02156, 2023.
Rodrigo Nogueira and Kyunghyun Cho. Passage re-ranking with bert. arXiv preprint arXiv:1901.04085, 2019.
Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. Document ranking with a pre-trained sequence-to-sequence model. In Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 708–718, 2020. | 2306.17563#43 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2306.17492 | 44 | Training set Method Harmlessbase Helpfulbase Helpfulonline Helpfulrejection Total HH-RLHFraw PRO −LSFT −T −LSFT − T 12.05 6.94 12.04 0.88 62.96 67.20 62.91 52.81 20.83 10.37 20.63 6.74 48.51 46.60 47.92 42.97 28.75 11.17 28.73 6.37 59.02 49.33 58.52 42.84 27.17 11.32 26.94 6.85 53.28 48.84 53.08 44.71 21.54 9.85 21.41 5.14 55.35 53.25 55.04 46.17 HH-RLHFAlpaca,3 PRO −Lk>1 PRO −LSFT −T −LSFT − T 14.41 13.38 9.06 13.71 0.52 62.6 62.88 65.78 63.40 55.90 22.47 21.50 18.77 21.70 2.13 54.38 53.48 54.18 53.77 23.41 25.61 24.56 23.90 24.84 3.56 60.90 60.32 62.26 60.36 23.44 26.82 25.81 23.33 26.01 2.66 58.26 57.15 58.29 57.34 | 2306.17492#44 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
Zhen Qin, Le Yan, Honglei Zhuang, Yi Tay, Rama Kumar Pasumarthi, Xuanhui Wang, Michael Bendersky, and Marc Najork. Are neural rankers still outperformed by gradient boosted decision trees? In International Conference on Learning Representations, 2021.
Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, and Luke Zettlemoyer. Improving passage retrieval with zero-shot question generation. arXiv preprint arXiv:2204.07496, 2022.
Is chatgpt good at search? investigating large language models as re-ranking agent. arXiv preprint arXiv:2304.09542, 2023. | 2306.17563#44 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2306.17492 | 45 | 60.32 62.26 60.36 23.44 26.82 25.81 23.33 26.01 2.66 58.26 57.15 58.29 57.34 23.82 22.11 21.10 18.29 21.34 2.05 58.72 58.11 59.71 58.40 32.33 HH-RLHFChatGPT,3 PRO −Lk>1 PRO −LSFT −T −LSFT − T 15.53 15.20 13.81 15.77 5.93 73.08 72.64 73.18 72.99 69.61 22.30 21.94 21.28 22.13 5.22 64.78 64.44 64.20 65.34 33.92 29.35 29.17 27.90 29.03 9.33 66.66 66.97 67.15 67.48 31.81 27.49 27.29 26.57 27.28 6.11 66.95 66.80 66.76 67.54 33.52 23.07 22.80 21.84 22.98 6.25 67.97 67.75 67.84 68.40 43.16 | 2306.17492#45 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17563 | 45 | Is chatgpt good at search? investigating large language models as re-ranking agent. arXiv preprint arXiv:2304.09542, 2023.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023.
Yi Tay, Mostafa Dehghani, Vinh Q Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, and Donald Metzler. Unifying language learning paradigms. arXiv preprint arXiv:2205.05131, 2022a.
Yi Tay, Vinh Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, et al. Transformer memory as a differentiable search index. Advances in Neural Information Processing Systems, 35:21831–21843, 2022b.
| 2306.17563#45 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2306.17492 | 46 | Table 5: Ablation results. We investigate the effectiveness of LPRO, LSFT and the dynamic temperature T.
# 4.7 Ablation Study
In this part, we investigate the effectiveness of each component of PRO; the results are reported in Table 5.
SFT Loss To avoid the model solely catering to the reward model at the expense of text quality, we introduce LSFT. Removing LSFT therefore lowers BLEU scores on the three datasets, but a higher-quality corpus can, to some extent, compensate for the drop, as shown by the results on HH-RLHFAlpaca,3 and HH-RLHFChatGPT,3 compared with HH-RLHFraw.
[Figure 3 plot: reward curves for the Ascending, Descending, Random, Alpaca, and ChatGPT settings across ranking lengths.]
PRO Loss Table 2 also demonstrates the influence of LPRO: excluding it from PRO essentially reduces to SFT (BoN), which obtains a lower Reward.
Figure 3: Results of experiments on different ranking lengths. | 2306.17492#46 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
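The ablation discussed in the chunk above toggles three ingredients: the ranking loss LPRO, the supervised term LSFT, and the dynamic temperature T. For readability, the assumed overall objective is written out below; the weighting scalar β and the exact way T enters LPRO's softmax terms are not specified in this chunk, so they are assumptions rather than the paper's notation.

```latex
% Assumed decomposition of the training objective that Table 5 ablates.
% \beta is a hypothetical weighting coefficient; y^1 is the top-ranked
% response and \pi_\theta the policy being fine-tuned.
\mathcal{L}(x) \;=\; \mathcal{L}_{\mathrm{PRO}}(x) \;+\; \beta\,\mathcal{L}_{\mathrm{SFT}}(x),
\qquad
\mathcal{L}_{\mathrm{SFT}}(x) \;=\; -\log \pi_\theta\!\left(y^{1} \mid x\right)
```

Under this reading, the "−LSFT" rows correspond to β = 0, the "−T" rows fix the temperature inside LPRO to a constant, and "−LSFT − T" does both.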
2306.17563 | 46 |
Adam VanBuskirk. Gpt-3.5 turbo vs gpt-4: What's the difference? https://blog.wordbot.io/ai-artificial-intelligence/gpt-3-5-turbo-vs-gpt-4-whats-the- 2023. Accessed: 2023-06-06.
Henning Wachsmuth, Shahbaz Syed, and Benno Stein. Retrieval of the best counterargument without prior topic knowledge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 241–251. Association for Computational Linguistics, 2018.
Liang Wang, Nan Yang, and Furu Wei. Query2doc: Query expansion with large language models. arXiv preprint arXiv:2303.07678, 2023.
Yujing Wang, Yingyan Hou, Haonan Wang, Ziming Miao, Shibin Wu, Qi Chen, Yuqing Xia, Chengmin Chi, Guoshuai Zhao, Zheng Liu, et al. A neural corpus indexer for document retrieval. Advances in Neural Information Processing Systems, 35:25600–25614, 2022. | 2306.17563#46 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2306.17492 | 47 | Figure 3: Results of experiments on different ranking lengths.
Adequate Ranking To fully leverage the ranking y^1 > y^2 > ... > y^n, we employ n − 1 loss functions to model y^1 > y^2, ..., y^n; y^2 > y^3, ..., y^n; ...; y^{n−1} > y^n. Our objective is to adequately model all ranking orders and enable the LLM to better differentiate between samples of different preferences. To validate this idea, we deactivate all terms of LPRO except the first one. Experimental results on three datasets consistently demonstrate a decrease in both BLEU and Reward scores, thus confirming the effectiveness of Equation 5.
Temperature PRO With or without the temperature (T), the model performs well, but T slightly enhances overall performance. Furthermore, we observe a significant drop in model performance when both the SFT loss and the temperature are removed simultaneously, whereas removing either one individually does not have such a noticeable impact. We believe this is because the temperature helps the model understand that some negative examples are neutral (with reward scores similar to positive examples) and thus should not be overly penalized, avoiding confusion during LLM training. The inclusion of the SFT loss plays a similar role by increasing the weight of the best response.
# 4.8 Discussions
# 4.8.1 How about continually expanding the Preference Ranking Sequence? | 2306.17492#47 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
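A minimal sketch of the "n − 1 loss functions" structure described in the chunk above. It assumes each response has already been scored by the length-normalized log-likelihood the policy assigns to it and that responses are ordered best-to-worst; the paper's dynamic, per-pair temperature is collapsed into a single optional scalar here, so this illustrates the structure rather than reproducing the reference implementation.

```python
import torch

def pro_ranking_loss(scores: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Preference-ranking loss over n responses ordered best-to-worst.

    scores: shape (n,), e.g. length-normalized sequence log-probabilities of
    the candidate responses under the policy, with scores[0] for the best one.
    For each k = 0..n-2 the k-th response is treated as the positive against
    every response ranked at or below it, giving n-1 softmax cross-entropy terms.
    """
    n = scores.shape[0]
    total = scores.new_zeros(())
    for k in range(n - 1):
        logits = scores[k:] / temperature  # candidates k..n-1
        total = total - torch.log_softmax(logits, dim=0)[0]  # contrast k-th vs. the rest
    return total

# Example: three responses, best first; a well-aligned policy should assign
# the best response the highest normalized log-likelihood.
loss = pro_ranking_loss(torch.tensor([-1.2, -1.9, -2.5]))
```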
2306.17563 | 47 | Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021.
Honglei Zhuang, Zhen Qin, Shuguang Han, Xuanhui Wang, Michael Bendersky, and Marc Najork. Ensemble distillation for bert-based ranking models. In Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval, pp. 131–136, 2021.
Honglei Zhuang, Zhen Qin, Rolf Jagerman, Kai Hui, Ji Ma, Jing Lu, Jianmo Ni, Xuanhui Wang, and Michael Bendersky. Rankt5: Fine-tuning t5 for text ranking with ranking losses. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2023.
| 2306.17563#47 | Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting | Ranking documents using Large Language Models (LLMs) by directly feeding the
query and candidate documents into the prompt is an interesting and practical
problem. However, there has been limited success so far, as researchers have
found it difficult to outperform fine-tuned baseline rankers on benchmark
datasets. We analyze pointwise and listwise ranking prompts used by existing
methods and argue that off-the-shelf LLMs do not fully understand these ranking
formulations, possibly due to the nature of how LLMs are trained. In this
paper, we propose to significantly reduce the burden on LLMs by using a new
technique called Pairwise Ranking Prompting (PRP). Our results are the first in
the literature to achieve state-of-the-art ranking performance on standard
benchmarks using moderate-sized open-sourced LLMs. On TREC-DL2020, PRP based on
the Flan-UL2 model with 20B parameters outperforms the previous best approach
in the literature, which is based on the blackbox commercial GPT-4 that has 50x
(estimated) model size, by over 5% at NDCG@1. On TREC-DL2019, PRP is only
inferior to the GPT-4 solution on the NDCG@5 and NDCG@10 metrics, while
outperforming other existing solutions, such as InstructGPT which has 175B
parameters, by over 10% for nearly all ranking metrics. Furthermore, we propose
several variants of PRP to improve efficiency and show that it is possible to
achieve competitive results even with linear complexity. We also discuss other
benefits of PRP, such as supporting both generation and scoring LLM APIs, as
well as being insensitive to input ordering. | http://arxiv.org/pdf/2306.17563 | Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky | cs.IR, cs.CL, cs.LG | 12 pages, 3 figures | null | cs.IR | 20230630 | 20230630 | [
{
"id": "2204.02311"
},
{
"id": "2305.03195"
},
{
"id": "2205.05131"
},
{
"id": "2205.12689"
},
{
"id": "2209.11755"
},
{
"id": "2211.09110"
},
{
"id": "2212.10496"
},
{
"id": "2305.08845"
},
{
"id": "2210.11416"
},
{
"id": "2304.09542"
},
{
"id": "2204.07496"
},
{
"id": "2305.02156"
},
{
"id": "2306.17563"
},
{
"id": "1901.04085"
},
{
"id": "2205.11916"
},
{
"id": "2109.01652"
},
{
"id": "2303.07678"
},
{
"id": "2305.03653"
},
{
"id": "2301.10521"
}
] |
2306.17492 | 49 | Training set Method Harmlessbase Helpfulbase Helpfulonline Helpfulrejection Total BLEU Reward BLEU Reward BLEU Reward BLEU Reward BLEU Reward HH-RLHFraw PRO PROs 12.05 16.84 62.96 59.27 20.83 22.34 48.51 48.22 28.75 30.13 59.02 58.23 27.17 28.21 53.28 53.41 21.54 23.77 55.35 54.20 HH-RLHFAlpaca,3 PRO PROs 14.41 13.44 62.6 62.44 22.47 21.18 54.38 52.82 25.61 23.01 60.90 59.07 26.82 25.36 58.26 56.51 22.11 20.68 58.72 57.44 HH-RLHFChatGPT,3 PRO PROs 15.53 15.53 73.08 73.16 22.30 22.02 64.78 65.34 29.35 29.04 66.66 67.18 27.49 27.49 66.95 67.41 23.07 22.96 67.97 68.36
Table 6: Results of diverse self-bootstrapping policies. | 2306.17492#49 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
2306.17492 | 50 | 5, followed by reranking using a reward model. Alpacas: Using Alpaca-7B, we generate 3 responses, adding 1, 2, and 3 responses, respectively, to form ranking sequences of lengths 3, 4, and 5. ChatGPT: Using ChatGPT, we generate three responses, adding 1, 2, and 3 responses, respectively, to form ranking sequences of lengths 3, 4, and 5. Ascending: We utilize three LLMs, namely Curie, Alpaca-7B, and ChatGPT. Based on the zero-shot results in Table 2, the quality of their responses can be ranked as ChatGPT > Alpaca-7B > Curie. In the Ascending setting, we add the responses in ascending order of quality. That is, for a sequence of length 3, we added Curie's response; for a sequence of length 4, we added Curie and Alpaca-7B's responses; and for a sequence of length 5, we added Curie, Alpaca-7B, and ChatGPT's responses. Descending: The data source is the same as Ascending, but the responses are added in the opposite order. For a sequence of length | 2306.17492#50 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |
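To make the expansion policies in the chunk above concrete, here is a minimal sketch of how a longer preference-ranking sequence could be assembled: candidate responses from additional models are pooled with the existing ranking and re-sorted by a reward model before PRO-style training. `generate_response` and `reward_model` are placeholders for whatever generation and scoring backends are used, not APIs from the paper.

```python
from typing import Callable, Dict, List

def expand_ranking_sequence(
    prompt: str,
    base_ranking: List[str],                        # existing preference ranking, best first
    extra_models: Dict[str, Callable[[str], str]],  # e.g. {"curie": ..., "alpaca": ..., "chatgpt": ...}
    reward_model: Callable[[str, str], float],      # scores a (prompt, response) pair
) -> List[str]:
    """Add one candidate per extra model, then rerank the pooled responses by
    reward so the result is again a best-to-worst preference sequence.

    Which models appear in extra_models is the expansion policy: e.g. the
    Ascending setting adds Curie first, then Alpaca-7B, then ChatGPT as the
    target sequence length grows from 3 to 5.
    """
    pool = list(base_ranking)
    for _name, generate_response in extra_models.items():
        pool.append(generate_response(prompt))
    return sorted(pool, key=lambda resp: reward_model(prompt, resp), reverse=True)
```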
2306.17492 | 51 | responses. Descending: The data source is the same as Ascending, but the responses are added in the opposite order. For a sequence of length 3, we added ChatGPT's response; for a sequence of length 4, we added ChatGPT and Alpaca-7B's responses; and for a sequence of length 5, we added Curie, Alpaca-7B, and ChatGPT's responses. Random: The order of response additions is unrelated to response quality and is done randomly. | 2306.17492#51 | Preference Ranking Optimization for Human Alignment | Large language models (LLMs) often contain misleading content, emphasizing
the need to align them with human values to ensure secure AI systems.
Reinforcement learning from human feedback (RLHF) has been employed to achieve
this alignment by combining a reward model, typically based on Bradley-Terry
paired comparison, with an RL algorithm such as Proximal Policy Optimization
(PPO) to optimize LLM responses. However, RLHF exhibits complexity,
instability, and sensitivity to hyperparameters. In this paper, we propose
Preference Ranking Optimization (PRO) as an alternative to PPO for directly
aligning LLMs with the Bradley-Terry comparison. PRO extends the pairwise
Bradley-Terry comparison to accommodate preference rankings of any length. By
iteratively contrasting the likelihood of generating responses, PRO instructs
the LLM to prioritize the best response while progressively ranking the
remaining responses. In this manner, PRO effectively transforms human alignment
into aligning the probability ranking of $n$ responses generated by LLM with
the preference ranking of humans towards these responses. Experiments have
shown that PRO outperforms existing alignment algorithms, achieving comparable
results to ChatGPT and human responses through automatic-based, reward-based,
GPT-4, and human evaluations. Furthermore, we demonstrate that longer, more
diverse, and higher-quality preference ranking sequences can consistently
enhance the performance of human alignment. | http://arxiv.org/pdf/2306.17492 | Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang | cs.CL, cs.AI | null | null | cs.CL | 20230630 | 20230630 | [
{
"id": "2302.13971"
},
{
"id": "2304.05302"
},
{
"id": "2302.05206"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2301.11774"
},
{
"id": "2210.10760"
},
{
"id": "2302.02676"
},
{
"id": "2305.16264"
},
{
"id": "1807.03748"
},
{
"id": "2304.08244"
},
{
"id": "2303.00001"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2305.08844"
},
{
"id": "2204.05862"
},
{
"id": "2306.01693"
},
{
"id": "2212.10560"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2106.05091"
},
{
"id": "1909.08593"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2305.11206"
},
{
"id": "2206.11871"
}
] |