id | title | content | prechunk_id | postchunk_id | arxiv_id | references |
---|---|---|---|---|---|---|
2307.13854#61
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
URL: http://openstreetmap.org OBJECTIVE: Show me the restaurants near ABC PREVIOUS ACTION: None example_assistant ```type [164] [restaurants near ABC] [1]``` Figure 10: The two examples provided as example_user and example_assistant for the direct agent. The agent directly emits the next action given the observation. Figure 11: Two examples where the GPT-4 agent failed, along with their screenshot and the accessibility tree of the relevant sections (grey). On the left, the agent fails to proceed to the "Users" section to accomplish the task of "Fork all Facebook repos"; on the right, the agent repeats entering the same search query even though the observation indicates the input box is filled.
|
2307.13854#60
|
2307.13854
|
[
"2112.09332"
] |
|
2307.12966#0
|
Aligning Large Language Models with Human: A Survey
|
arXiv:2307.12966v1 [cs.CL] 24 Jul 2023 # Aligning Large Language Models with Human: A Survey Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu Huawei Noah's Ark Lab {wangyufei44,zhongwanjun1,liliangyou,mifei2,zeng.xingshan,wenyong.huang}@huawei.com {Shang.Lifeng,Jiang.Xin,qun.liu}@huawei.com # Abstract
|
2307.12966#1
|
2307.12966
|
[
"2307.03109"
] |
|
2307.12966#1
|
Aligning Large Language Models with Human: A Survey
|
Large Language Models (LLMs) trained on extensive textual corpora have emerged as leading solutions for a broad array of Natural Language Processing (NLP) tasks. Despite their notable performance, these models are prone to certain limitations such as misunderstanding human instructions, generating potentially biased content, or factually incorrect (hallucinated) information. Hence, aligning LLMs with human expectations has become an active area of interest within the research community. This survey presents a comprehensive overview of these alignment technologies, including the following aspects. (1) Data collection: the methods for effectively collecting high-quality instructions for LLM alignment, including the use of NLP benchmarks, human annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed review of the prevailing training methods employed for LLM alignment. Our exploration encompasses Supervised Fine-tuning, both Online and Offline human preference training, along with parameter-efficient training mechanisms. (3) Model Evaluation: the methods for evaluating the effectiveness of these human-aligned LLMs, presenting a multifaceted approach towards their assessment. In conclusion, we collate and distill our findings, shedding light on several promising future research avenues in the field. This survey, therefore, serves as a valuable resource for anyone invested in understanding and advancing the alignment of LLMs to better suit human-oriented tasks and expectations. An associated GitHub link collecting the latest papers is available at https://github.com/GaryYufei/AlignLLMHumanSurvey.
|
2307.12966#0
|
2307.12966#2
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#2
|
Aligning Large Language Models with Human: A Survey
|
# Introduction

Foundational Large Language Models (LLMs) such as GPT-3 are pre-trained on a vast textual corpus with objectives to predict subsequent tokens. This process equips LLMs with world knowledge, facilitating the generation of coherent and fluent text in response to various inputs. Despite these strengths, foundational LLMs are not always adept at interpreting a wide range of instructions and can produce outputs that deviate from human expectations. Additionally, these models may produce biased content or invent (hallucinated) facts, which can limit their practical usefulness. Therefore, recent NLP research efforts focus on empowering LLMs to understand instructions and to align with human expectations. Early methods for training LLMs to follow instructions primarily use task instruction sets, which are compiled by combining manually crafted task instruction templates with instances from standard NLP tasks. However, such approaches often fall short of capturing the intricacies of practical user instructions, as these instructions tend to originate from artificial NLP tasks designed to test specific aspects of machine capabilities. Real-world user instructions, on the other hand, are significantly more diverse and complex. As a result, OpenAI explored Supervised Fine-Tuning (SFT) of LLMs using instructions annotated by a diverse group of human users. Models developed through this process, such as InstructGPT (Ouyang et al., 2022) and ChatGPT (https://chat.openai.com/), have demonstrated a marked improvement in understanding human instructions and solving complex tasks. To further enhance alignment, Ouyang et al. (2022) incorporate the Reinforcement Learning from Human Feedback (RLHF) approach, which involves learning from human preferences through a reward model trained with human-rated outputs. There are challenges in alignment processes and the subsequent evaluation: (a) Collecting high-quality data for both SFT and RLHF stages can be costly and time-consuming. (b) The training strategies need to be optimized, as SFT training is resource-consuming and reinforcement learning in RLHF often lacks stability. (c) Evaluating LLMs
|
2307.12966#1
|
2307.12966#3
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#3
|
Aligning Large Language Models with Human: A Survey
|
Alignment for LLMs (Figure 1, partial): Instructions From Human (§2.1): NLP Benchmarks: PromptSource (Bach et al., 2022), SuperNaturalInstruction (Wang et al., 2022b), FLAN (Longpre et al., 2023), Unnatural Instructions (Honovich et al., 2022), OIG (Nguyen et al., 2023); Hand-crafted Instructions: Dolly-v2 (Conover et al., 2023), OpenAssistant (Kopf et al., 2023), COIG (Zhang et al., 2023a), ShareGPT (Chiang et al., 2023). Instructions From Strong LLMs (§2.2): Self-Instruct Data: Improving Input Quality, Improving Output Quality; Multi-Turn Instructions: Baize (Xu et al., 2023c), CAMEL (Li et al., 2023a), SelFee (Ye et al., 2023a), UltraLLaMA (Ding et al., 2023), Vicuna (Chiang et al., 2023); Multilingual Instructions: Phoenix (Chen et al., 2023e), BayLing (Zhang et al., 2023c), BactrianX (Li et al., 2023b). Instruction Data Management (§2.3): Instruction Implications: TÜLU (Wang et al., 2023d), FLACUNA (Ghosal et al., 2023), Data-Constrained LM (Muennighoff et al., 2023), BELLE (Ji et al., 2023); Instruction Quantity: IFS (AlShikh et al., 2023), LIMA (Zhou et al., 2023), Instruction Mining (Cao et al., 2023), Alpagasus (Chen et al., 2023b). Training: Online Human Alignment (§3.1): RLHF (Ouyang et al., 2022), RAFT (Dong et al., 2023); Offline Human Alignment (§3.2); Parameter-Effi
|
2307.12966#2
|
2307.12966#4
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#4
|
Aligning Large Language Models with Human: A Survey
|
cient Training (§3.3). Rank-based Training: DPO (Rafailov et al., 2023), PRO (Song et al., 2023), RRHF (Yuan et al., 2023), SLiC (Zhao et al., 2023). Language-based Training: Conditional Behavior Cloning (Wang et al., 2023a), CoH (Liu et al., 2023b), Second Thoughts (Liu et al., 2022b), Stable Alignment (Liu et al., 2023d), SelFee (Ye et al., 2023a). Parameter-efficient methods: Prefix Tuning (Li and Liang, 2021), Prompt Tuning (Lester et al., 2021), LoRA (Hu et al., 2022), AdaLoRA (Zhang et al., 2023b), QLoRA (Dettmers et al., 2023), Unifi
|
2307.12966#3
|
2307.12966#5
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#5
|
Aligning Large Language Models with Human: A Survey
|
ed Prompt (Chen et al., 2023a). Evaluation Benchmarks (§4.1): Closed-set Benchmarks: General Knowledge: MMLU (Hendrycks et al., 2021), C-MMLU (Li et al., 2023c), C-Eval (Huang et al., 2023), KoLA (Yu et al., 2023a), M3KE (Liu et al., 2023a), AGIEval (Zhong et al., 2023); Reasoning: GSM8K (Cobbe et al., 2021), Maths (Hendrycks et al., 2021), CSQA (Talmor et al., 2019), StrategyQA (Geva et al., 2021), Coin Flip (Wei et al., 2022b), BBH (Suzgun et al., 2022); Coding: MBPP (Austin et al., 2021), DS-1000 (Lai et al., 2022), HumanEval (Chen et al., 2021), HumanEval+ (Liu et al., 2023c). Open-set Benchmarks: Vicuna-80 (Chiang et al., 2023), Open-Assistant-953 (Kopf et al., 2023), User-Instructions-252 (Wang et al., 2022a), FLASK (Ye et al., 2023b), MT-Bench (Zheng et al., 2023), AlpacaEval (Dubois et al., 2023). Human-based Evaluation: Ordinal Classifi
|
2307.12966#4
|
2307.12966#6
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#6
|
Aligning Large Language Models with Human: A Survey
|
cation (Wang et al., 2022a; Wu et al., 2023), Pairwise Comparison (Taori et al., 2023), Elo (Zheng et al., 2023). Evaluation Paradigms (§4.2): LLMs-based Evaluation: Reference-Free Evaluation: GPTEval (Liu et al., 2023e), GPTScore (Fu et al., 2023), Explicit Score (Chen et al., 2023d), LM Examiner (Chiang and Lee, 2023), FactScore (Min et al., 2023), AlignScore (Zha et al., 2023); LLMs Bias in Evaluation: Positional Bias (Wang et al., 2023b), Multi-Elo (Wu and Aji, 2023), LLM-as-a-Judge (Zheng et al., 2023); LLMs for Evaluation: PandaLM (Wang et al., 2023c). Improving Output Quality (§2.2): CoT (Wei et al., 2022b), Orca (Mukherjee et al., 2023), Lion (Jiang et al., 2023), Self-Alignment (Sun et al., 2023b), Phoenix (Chen et al., 2023e), Expert Prompting (Xu et al., 2023a). Figure 1: Taxonomy of research in aligning Large Language Models (LLMs) with human that consists of alignment data, training strategy, and evaluation methods. comprehensively is challenging, as limited NLP benchmarks may not fully reveal the multifaceted capabilities of LLMs. To address these limitations, extensive research efforts have been devoted. In Figure 1, we provide a summary of these multi-aspect approaches. For aspect (a), the focus is on effectively collecting large-scale, high-quality data for LLM alignment training. Researchers propose leveraging the power of existing NLP benchmarks, human annotators, and state-of-the-art LLMs (e.g., ChatGPT and GPT-4) to generate training instructions.
|
2307.12966#5
|
2307.12966#7
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#7
|
Aligning Large Language Models with Human: A Survey
|
To tackle aspect (b), solutions involve optimizing the training methods for better efï¬ ciency and stability in incorporating human preferences. Parameter- efï¬ cient training methods have been proposed to reduce computation burden and improve efï¬ ciency in LLM alignment. Additionally, some researchers consider human preference as ranking-based train- ing signals or replace scalar rewards with language- based feedback to enhance training stability and performance. Regarding aspect (c), various human- centric LLM evaluation benchmarks and automatic evaluation protocols (e.g., LLMs for evaluation) have been proposed to obtain a comprehensive eval- uation of aligned LLMs. In this survey, we aim to provide a comprehen- sive overview of alignment technologies for large language models. In Section 2, we summarize vari- ous methods in effective high-quality data collec- tion. Section 3 focuses on popular training methods to incorporate human preference data into LLMs. The evaluation benchmarks and automatic proto- cols for instruction-following LLMs are discussed in Section 4. By collating and distilling our ï¬ nd- ings, we shed light on several promising future research avenues in Section 5. Through this survey, we aim to provide an overview of the current state of LLM alignment, enabling researchers and prac- titioners to navigate the complexities of aligning LLMs with human values and expectations. # 2 Alignment Data Collection Aligning LLMs with human expectations neces- sitates the collection of high-quality training data that authentically reï¬ ects human needs and expec- tations. For the purposes of this survey, we con- ceptualize an instruction as Ik = (xk, yk), where xk denotes the instruction input and yk denotes the corresponding response. This data can be de- rived from an array of sources, encompassing both human-generated instructions and those generated by strong LLMs. In this section, we summarize these methods of instruction generation and effec- tive strategies for constructing a composite of di- verse training instructions. # Instructions from Human Human-provided instructions mainly originate from two main sources: pre-existing human- annotated NLP benchmarks and meticulously hand- crafted instructions. 2.1.1 NLP Benchmarks An intuitive starting point for data collection in- volves adapting existing NLP benchmarks into natural language instructions.
|
2307.12966#6
|
2307.12966#8
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#8
|
Aligning Large Language Models with Human: A Survey
|
For instance, Figure 2 offers an example drawn from the Natural Language Inference task. Works such as PromptSource (Bach et al., 2022), FLAN (Wei et al., 2022a; Longpre et al., 2023), and SuperNaturalInstruction (Wang et al., 2022b; Mishra et al., 2022) are at the forefront of this approach. These benchmarks represent a substantial array of diverse and heterogeneous NLP tasks, such as dialogue, reasoning tasks and coding tasks, unifi
|
2307.12966#7
|
2307.12966#9
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#9
|
Aligning Large Language Models with Human: A Survey
|
ed under the framework of language instructions. Figure 2: An Example of Instruction from a Natural Language Inference (NLI) benchmark. Template with placeholders: Question: Given {{Premise}}, does this imply that "{{Hypothesis}}"? Yes, No or Maybe? Answer: {{Label}}. Task Instances From NLP Benchmarks: Premise: This church choir sings to the masses as they sing joyous songs from the book at a church. Hypothesis: The church has cracks in the ceiling. Label: Maybe. In each NLP benchmark, they engage annotators to craft several natural language templates that smoothly integrate all input data into a sequential text. The objective is to enhance LLMs' capability for multi-task learning across training tasks and foster generalization for unseen tasks. OIG (Nguyen et al., 2023) also combines instructions from FLAN-like NLP benchmarks with other types of open-ended instructions, such as how-to, maths and coding instructions. Concurrently, Honovich et al. (2022) put forth the concept of Unnatural Instructions, utilizing LLMs to generate new templates or instances bearing resemblance to the original instructions but with notable variances. Interestingly, the authors discovered that text-davinci-002 outperforms GPT-3 in responding to these generated instructions, given that GPT-3 often devolved into repetitive or tangential outputs after providing the correct answer. This model of instruction creation is highly scalable and can yield millions of instructions effectively. Further, Wang et al. (2023d) demonstrated that FLAN-style instructions considerably enhanced the reasoning capabilities of aligned LLMs. # 2.1.2 Hand-crafted Instructions Constructing instructions from NLP benchmarks can be effective and painless. However, many NLP datasets focus on a small and specific skill set, which means the resultant instructions are also relatively narrow in scope. Consequently, they may fall short in catering to the complex needs of real-world applications, such as engaging in dynamic human conversation. To combat the above issues, it is possible to construct instructions via intentional manual annotations. How to effectively design a human-in-the-loop annotation framework becomes the key issue.
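To make the template-filling step from Figure 2 concrete, here is a minimal sketch; the template wording mirrors the example above and is not taken verbatim from PromptSource.

```python
# Minimal sketch of turning an NLI instance into a natural-language instruction.
# The template is illustrative; real PromptSource templates differ in wording.
def fill_nli_template(premise: str, hypothesis: str, label: str) -> str:
    template = (
        'Question: Given "{premise}", does this imply that "{hypothesis}"? '
        "Yes, No or Maybe?\nAnswer: {label}"
    )
    return template.format(premise=premise, hypothesis=hypothesis, label=label)

example = fill_nli_template(
    premise="This church choir sings to the masses as they sing joyous songs "
            "from the book at a church.",
    hypothesis="The church has cracks in the ceiling.",
    label="Maybe",
)
print(example)
```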
|
2307.12966#8
|
2307.12966#10
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#10
|
Aligning Large Language Models with Human: A Survey
|
The Databricks company collects a 15k crowd-sourcing instruction dataset databricks- dolly-15k (Conover et al., 2023) from its employees. Those people are instructed to create prompt / re- sponse pairs in each of eight different instruction categories, including the seven outlined in Ouyang et al. (2022), as well as an open-ended free-form category. Importantly, they are explicitly instructed not to use external web information, as well as outputs from generative AI systems. Kopf et al. (2023) construct the OpenAssistant corpus with over 10,000 dialogues using more than 13,000 in- ternational annotators. The annotation process in- cludes a) writing initial prompts for dialogue; b) replying as an assistant or user; c) ranking dia- logue quality to explicitly provide human prefer- ences. As a result, this corpus can be used for SFT and human preference alignment training for LLMs. Zhang et al. (2023a) construct high-quality Chinese instructions from existing English instruc- tion datasets.
|
2307.12966#9
|
2307.12966#11
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#11
|
Aligning Large Language Models with Human: A Survey
|
They ï¬ rst translate the English in- structions into Chinese, then verify whether these translations are usable. Finally, they hire annota- tors to correct and re-organize the instructions into the task description, input, output format in the selected corpus. ShareGPT 2, which is collected by Chiang et al. (2023), is an interesting explo- ration for crowd-sourcing human-written instruc- tions. It is a website that encourages users to upload and share their interesting ChatGPT/GPT4 conver- sations. Such a mechanism can effectively col- lect a large number of diverse and human-written instructions that likely trigger high-quality Chat- GPT/GPT4 responses. Popular online QA websites, such as Stack Overï¬ ow 3, Quora 4 and Zhihu 5, and large user-generated content databases, such as Wikipedia 6, are all reliable sources to provide high-quality human-written prompts for this pur- pose.Both Ding et al. (2023) and Xu et al. (2023c) propose to use these resources as the seed instruc- tions to prompt GPT-3.5 to generate high-quality synthetic multi-turn dialogues.
|
2307.12966#10
|
2307.12966#12
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#12
|
Aligning Large Language Models with Human: A Survey
|
# Instructions From Strong LLMs With the emergence of strong closed-source LLMs (e.g., ChatGPT/GPT4), it is also feasible to automate the collection process to obtain various types of synthetic instructions (e.g., single-turn, multi-turn, and multilingual instructions) by providing appropriate prompts to these LLMs. The main challenge is how to effectively prompt LLMs to generate diverse and high-quality instructions. (Footnotes 2-6: 2 https://sharegpt.com/ 3 https://stackoverflow.com/ 4 https://www.quora.com/ 5 https://www.zhihu.com/ 6 https://en.wikipedia.org/) Figure 3: The overview of self-instruction. Starting from instructions in the pool, self-instruction leverages LLMs to produce new instructions via in-context learning.
|
2307.12966#11
|
2307.12966#13
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#13
|
Aligning Large Language Models with Human: A Survey
|
After filtering, LLMs are then prompted to respond to the remaining instructions. The full instructions are then added to the pool. Research efforts have been devoted to 1) Improving instruction input quality, and 2) Improving instruction output quality. 2.2.1 Self-Instruction Self-Instruct (Wang et al., 2022a) was among the pioneers to automate the instruction collection process. It employed the in-context learning capability of ChatGPT to generate large-scale instructions from a pre-defined set of human-annotated instructions covering diverse topics and task types, as illustrated in Figure 3. The automatically generated instructions are followed by a quality control filtering process, and this iterative process continues until the desired data volume has been achieved. Interestingly, the researchers discovered that GPT-3 (Brown et al., 2020), fine-tuned with these instructions, performed better than models fine-tuned using instructions derived from NLP benchmarks (the SuperNI benchmark (Wang et al., 2022b) and User-Oriented Instructions, as discussed in Section 2.1). Several follow-up attempts, such as Alpaca (Taori et al., 2023) and its variants (Cui et al., 2023a), follow this Self-Instruct framework. More subsequent research efforts w.r.t. enhancing instruction diversity, quality, and complexity will be elaborated as follows. Improving Input Quality One limitation is that the synthetic instructions from strong LLMs often suffer from diversity issues.
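A minimal sketch of the Self-Instruct loop described above; `call_llm` and the duplicate filter are placeholders (the original work filters by ROUGE-L overlap), and the prompt wording is illustrative.

```python
import random

def call_llm(prompt: str) -> str:
    """Placeholder for an API call to a strong LLM (e.g., ChatGPT)."""
    raise NotImplementedError

def too_similar(candidate: str, pool: list[str]) -> bool:
    """Placeholder diversity filter; Self-Instruct uses ROUGE-L overlap instead."""
    return any(candidate.strip() == seen.strip() for seen in pool)

def self_instruct(seed_instructions: list[str], target_size: int) -> list[dict]:
    pool = list(seed_instructions)
    dataset = []
    while len(dataset) < target_size:
        # 1) Sample in-context demonstrations from the current instruction pool.
        demos = random.sample(pool, k=min(4, len(pool)))
        prompt = "Come up with a new task instruction:\n" + "\n".join(
            f"Instruction: {d}" for d in demos
        ) + "\nInstruction:"
        new_instruction = call_llm(prompt)
        # 2) Filter near-duplicates before keeping the generated instruction.
        if too_similar(new_instruction, pool):
            continue
        # 3) Ask the LLM to answer the surviving instruction, then grow the pool.
        response = call_llm(f"{new_instruction}\nResponse:")
        pool.append(new_instruction)
        dataset.append({"instruction": new_instruction, "output": response})
    return dataset
```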
|
2307.12966#12
|
2307.12966#14
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#14
|
Aligning Large Language Models with Human: A Survey
|
For example, Jentzsch and Kersting (2023) ï¬ nd that when prompting to generate jokes, ChatGPT only produces 25 unique joke patterns in thousands of samples. To im- prove the instruction input diversity, Wang et al. (2022a) propose different input and output gen- eration strategies for different types of instruc- tions. They ï¬ rst prompt ChatGPT to classify gen- erated instruction into classiï¬ cation tasks or non- classiï¬ cation tasks. Then, they deploy output-ï¬ rst and input-ï¬ rst strategies for classiï¬ cation tasks and non-classiï¬ cation tasks, respectively. Others propose to add various external information into the input prompts to enhance diversity and factual- ity, including Wikipedia Category Keywords (Wu et al., 2023), user-generated questions on the Inter- net (e.g., Quora, StackOverï¬ ow) (Xu et al., 2023c; Anand et al., 2023) and instructions from the Su- perNaturalInstruction benchmark (Honovich et al., 2022). Yu et al. (2023b) also shows that explic- itly adding meta-information (e.g., length, topics, style) into the data generation prompts can effec- tively remove the bias in the generated synthetic data and improve the diversity of those synthetic data. Furthermore, Xu et al. (2023b) propose a novel Evol-Instruct framework to obtain complex and difï¬ cult instructions gradually. Instead of using existing instructions to prompt LLMs to produce new instructions via in-context learning, in Evol- Instruct, there are ï¬ ve different manually-designed prompts to explicitly instruct LLMs to rewrite the existing simple instructions into complex ones us- ing in-depth methods (i.e., including more infor- mation on particular topics) or in-Breadth methods (i.e, improving topics/information coverage). The resulting WizardLM model is ranked top in the MT- Bench (Zheng et al., 2023) and AlpacaEval (Dubois et al., 2023).
|
2307.12966#13
|
2307.12966#15
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#15
|
Aligning Large Language Models with Human: A Survey
|
Luo et al. (2023) further expand this idea to produce complex code and programming instructions from the simple ones and propose the WizardCoder model, which outperforms several strong commercial LLMs, e.g., Anthropicâ s Claude and Googleâ s Bard. Gunasekar et al. (2023) propose to generate textbook-like instructions prompted with sufï¬ cient background knowledge to promote reasoning and basic algorithmic skills of LLMs. They ï¬ nd that the resulting 1.3B LLMs phi-1 suc- cessfully outperform various much larger LLMs, showing the importance of data quality. Improving Output Quality Aside from the pro- vision of high-quality instruction input, a critical requisite is to skillfully prompt LLMs to yield high- quality responses. The conventional method of enhancing response quality entails appending LLM prompts with additional conditions, encompassing the following facets. (1) Reasoning-Provoking Conditions: Wei et al. (2022b) proposed the Chain-of-Thought (CoT) reasoning approach, which includes precon- ditions in the LLM prompts and generation the intermediate reasoning processes for complex prob- lems, thereby assisting LLMs in problem-solving. Inspired by CoT, Mukherjee et al. (2023) devel- oped the Orca model, which learns not only the superï¬ cial response text from LLMs, but also cap- tures complex reasoning process signals. Speciï¬ - cally, they guided LLMs to respond to reasoning- intensive FLAN instructions with a series of pre- deï¬ ned system prompts (e.g., â think step-by-step and justify your responseâ ), spurring LLMs (e.g., GPT4) to disclose their reasoning process infor- mation.
|
2307.12966#14
|
2307.12966#16
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#16
|
Aligning Large Language Models with Human: A Survey
|
Thanks to these advancements, the Orca model signiï¬ cantly outperformed several powerful open-sourced LLMs. (2) Hand-crafted Guiding Principles: Sun et al. (2023b) introduced self-alignment framework that incorporates 16 manually devised principle rules into input prompts, thereby steering LLMs towards generating useful, ethical, and reliable re- sponses. To augment the impact of these rules, they employed the Chain-of-Thoughts (CoT) technol- ogy (Wei et al., 2022b), elucidating ï¬ ve examples to coach LLMs in discerning which rules to imple- ment prior to generating actual response contents. Chen et al. (2023e) devised a method to generate a set of role proï¬ les using a blend of ChatGPT and manual ef- forts. They created seed instructions for each role proï¬ le and applied self-instruction to the combi- nation of role proï¬ les and instructions to obtain nuanced responses from LLMs. Xu et al. (2023a) proposed a two-stage instruction response frame- work in which an expert proï¬ le is initially gen- erated based on the instructions to be answered, followed by using both the expert proï¬ le and ac- tual instructions to prompt LLMs for high-quality responses. Jiang et al. (2023) proposed monitoring the quality of instruction response based on external LLM-based evaluations.
|
2307.12966#15
|
2307.12966#17
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#17
|
Aligning Large Language Models with Human: A Survey
|
They ï¬ rst ï¬ ne-tune foundational LLMs with instruction data to obtain â student LLMsâ . Then, for each of training instruction, they gather responses from both teacher LLMs (e.g., ChatGPT) and student LLMs and prompted LLMs to conduct pairwise evaluation on the quality of both responses. Instructions are retained only when the student LLMsâ response falls short of that from the teacher LLMs. # 2.2.2 Multi-turn Instructions In previous sections, we mainly focus on collecting synthetic single-turn instructions. However, LLMs well aligned with human should be capable to in- teract with users in a dialogue-based setting. To achieve this goal, some research efforts attempt to collect synthetic multi-turn instructions from strong LLMs. When aligning LLaMA with human, Vicuna (Chiang et al., 2023) leverage instructions from ShareGPT which is website hosting interest- ing human-LLMs joint conversations. However, ShareGPT requires large volumes of users to up- load their conversations. Xu et al. (2023c) propose a novel Self-Chatting framework where questions from popular QA websites are used as the starting topics, then Chat-3.5 is prompted to chat with it- self about this question in a four-turn dialogue. Li et al. (2023a) propose CAMEL, a â role-playingâ framework where a human annotators ï¬ rst provide a topic, then LLMs are separately prompted to be â AI Usersâ and â AI Assistantsâ to discuss about this topic. Ji et al. (2023) take a step further and prompt LLMs to ï¬ rst determine the conversation topic and then ask LLMs to chat with themselves to produce dialogue corpus. Ye et al. (2023a) propose a novel revision-based multi-turn dialogue corpus.
|
2307.12966#16
|
2307.12966#18
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#18
|
Aligning Large Language Models with Human: A Survey
|
Specifically, after instructions and initial responses, they further prompt LLMs to generate feedback and a revised version of the responses if necessary. They use this dataset to train the SelFee model and show that SelFee can effectively improve its own answers when prompted to do so without any external guidance. The UltraLLaMA model (Ding et al., 2023) leverages a wide range of real-world information, including (a) real-world knowledge from LLMs and Wikipedia; (b) various text creation tasks; (c) high-quality textual corpus, to produce initial questions and instructions that guide LLMs to generate diverse and high-quality multi-turn dialogues. # 2.2.3 Multilingual Instructions The above-generated instructions or dialogues are mostly based on English. To align LLMs with humans who speak other languages, it is urgent and essential to expand the existing English resources into multilingual ones. One straightforward idea is to translate instruction inputs and outputs into the target languages. Chen et al. (2023e) propose two translation strategies: (a) Post-answering, which first translates the instruction inputs into the target language and then prompts strong LLMs to answer it. This could potentially preserve the specific culture patterns embedded in the target languages, but the output quality may be low as existing strong LLMs are often English-dominated; (b) Post-translating, which first prompts strong LLMs to respond to the instructions in English, then translates both inputs and outputs. This approach can obtain high-quality output text, but loses the specific cultural information. Li et al. (2023b) follow the Post-answering strategy to construct instruction data for 52 popular languages using Google Translate, then use these data to fine-tune LLaMA using the LoRA technology. An alternative solution is to mix several languages in a multi-turn dialogue. BayLing (Zhang et al., 2023c) introduces a set of multi-turn interactive translation instructions to simultaneously improve multilingual and instruction-following ability for LLMs.
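A minimal sketch contrasting the two translation strategies above; `call_llm` and `translate` are placeholders for a strong (English-dominated) LLM and a machine-translation service.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a strong, English-dominated LLM."""
    raise NotImplementedError

def translate(text: str, target_lang: str) -> str:
    """Placeholder for a machine-translation service (e.g., Google Translate)."""
    raise NotImplementedError

def post_answering(instruction_en: str, target_lang: str) -> tuple[str, str]:
    # Translate the instruction first, then let the LLM answer in the target
    # language: preserves culture-specific phrasing, output quality may drop.
    instruction = translate(instruction_en, target_lang)
    return instruction, call_llm(instruction)

def post_translating(instruction_en: str, target_lang: str) -> tuple[str, str]:
    # Answer in English first, then translate both sides: higher output
    # quality, but culture-specific information can be lost.
    answer_en = call_llm(instruction_en)
    return translate(instruction_en, target_lang), translate(answer_en, target_lang)
```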
|
2307.12966#17
|
2307.12966#19
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#19
|
Aligning Large Language Models with Human: A Survey
|
Speciï¬ - cally, each multi-turn instruction is essentially a translation task where users ï¬ rst ask LLMs to trans- late a sentence to another language, then the users gradually add additional requirements (e.g., could you only use 10 words?). This process naturally connects different languages as well as human pref- erences with LLMs. We also summarize how to effectively adapt English-oriented LLMs to other languages in Appendix A.1. # Instruction Data Management As discussed above, there are extensive approaches focusing on generating high-quality instructions from different sources. Naturally, it becomes crit- ical to effectively manage all of these instruction data in the LLMs alignment. Instruction Implications Several studies focus on the implications of instruction data. Ji et al. (2023) demonstrate that an increment in the total count of training instructions can be advantageous for standard NLP tasks (e.g., information extrac- tion, classiï¬ cation, Closed QA, summarization). Yet, it bears negligible inï¬ uence on complex rea- soning tasks such as Math, Code, CoT, and Brain- storming. Intriguingly, Muennighoff et al. (2023) discover that adding approximately 50% of pro- gramming instructions not only leaves unaffected the general conversational performance but also enhances the reasoning prowess of LLMs. In par- allel, Ghosal et al. (2023) observe that integrating FLAN-style instructions with synthetic instructions from ChatGPT/GPT-4 effectively enhances LLMsâ reasoning and problem-solving capacity. Wang et al. (2023d) conduct a comprehensive analysis of the impacts of various instructions de- rived from different sources on factual knowledge, reasoning, coding, multilingual, and open-ended scenarios. They also reveal that instructions per- taining to CoT and Coding are vital for augmenting the reasoning capability of LLMs. Additionally, they ascertain that different instructions can affect different LLM capabilities. Therefore, a composite of all instruction types empowers the correspond- ing LLMs to reach their better overall performance, hinting at the need for more advanced instruction collection techniques and technologies. Instruction Quantity Another critical question in instruction data management is the optimal quan- tity of instruction data required for effective LLM alignment.
|
2307.12966#18
|
2307.12966#20
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#20
|
Aligning Large Language Models with Human: A Survey
|
AlShikh et al. (2023) address this ques- tion by introducing a novel early-stopping criterion known as IFS. The premise of IFS rests on the ob- servation that, given an input textual preï¬ x, founda- tional LLMs typically predict ensuing tokens and generate "continuation-like" outputs, while fully instruction-tuned LLMs interpret the input preï¬ x as questions, thereby generating "answer-like" out- puts. IFS is quantiï¬ ed as the proportion of "answer- like" outputs within all its outputs given the instruc- tions. The researchers train an external classiï¬ er to discriminate between "continuation-like" and "answer-like" outputs, concluding that LLaMA ne- cessitates approximately 8K instructions to achieve a high IFS score. More instructions could poten- tially induce a semantic shift in the foundational LLMs. Zhou et al. (2023) similarly discern that merely 6K high-quality instructions sufï¬ ce to align with human preferences. Motivated by these ï¬ nd- ings, researchers are investigating high-quality in- struction selection. Cao et al. (2023) aim to iden- tify predictive features of high-quality instructions. Initially, they extract representative features from the instruction dataset, then utilize these instruc- tions to ï¬
|
2307.12966#19
|
2307.12966#21
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#21
|
Aligning Large Language Models with Human: A Survey
|
ne-tune LLMs. The feature importance is based on the model's performance. Their experiments demonstrate the better performance of LLMs trained on the resultant instructions. Differently, Chen et al. (2023b) propose using ChatGPT to directly assess the quality of instructions by assigning scores. They report that the LLM trained on the top 9K instructions notably outperforms those trained on the complete set of 52K Alpaca instructions. # 3 Alignment Training After collecting instructions from various sources, we then consider using these data to fine-tune existing foundational LLMs to align with human. The native solution is Supervised Fine-Tuning (SFT).
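As a concrete illustration of the score-and-filter selection described above (in the spirit of Chen et al., 2023b), here is a minimal sketch; the rating prompt and scoring scale are illustrative, not the original ones.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for an LLM used as a quality rater (e.g., ChatGPT)."""
    raise NotImplementedError

def score_instruction(instruction: str, response: str) -> float:
    # Illustrative rating prompt; the wording is not the original Alpagasus prompt.
    prompt = (
        "Rate the quality of the following instruction-response pair "
        "on a scale from 1 to 5. Reply with a single number.\n"
        f"Instruction: {instruction}\nResponse: {response}\nScore:"
    )
    return float(call_llm(prompt).strip())

def select_top_instructions(data: list[dict], keep: int) -> list[dict]:
    scored = [(score_instruction(d["instruction"], d["output"]), d) for d in data]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for _, d in scored[:keep]]  # e.g., keep the top ~9K of 52K pairs
```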
|
2307.12966#20
|
2307.12966#22
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#22
|
Aligning Large Language Models with Human: A Survey
|
Specifically, given instruction input x, SFT calculates the cross-entropy loss over the ground-truth response y as follows: $\mathcal{L}_{\text{SFT}} = -\sum_{t} \log P_{\text{LLM}}(y_{t} \mid x, y_{<t})$ (1). Essentially, SFT helps LLMs to understand the semantic meaning of prompts and make meaningful responses. The main limitation of SFT is that it only teaches LLMs about the best responses and cannot provide fi
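A minimal PyTorch sketch of Eq. (1), assuming a Hugging Face-style causal LM that returns `.logits` and a single sequence formed by concatenating the prompt and response tokens.

```python
import torch
import torch.nn.functional as F

def sft_loss(model, input_ids: torch.Tensor, prompt_len: int) -> torch.Tensor:
    """Cross-entropy over response tokens only (Eq. 1), for a single sequence.

    input_ids: shape (1, seq_len), the prompt x followed by the response y.
    prompt_len: number of prompt tokens; the loss is masked on those positions.
    """
    logits = model(input_ids).logits            # (1, seq_len, vocab)
    # Position t predicts token t+1: shift logits and labels by one.
    shift_logits = logits[:, :-1, :]
    shift_labels = input_ids[:, 1:].clone()
    # Ignore the prompt part so only log P(y_t | x, y_<t) contributes.
    shift_labels[:, : prompt_len - 1] = -100    # -100 is ignored by cross_entropy
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        ignore_index=-100,
    )
```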
|
2307.12966#21
|
2307.12966#23
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#23
|
Aligning Large Language Models with Human: A Survey
|
ne-grained comparisons to sub-optimal ones. However, it is worth noting that the SFT objective or SFT model parameters have also been integrated into many human preference training objectives to regularize and stabilize the training process of LLMs. We summarize the research efforts built on top of SFT into: Online human preference training, Offline human preference training, and Parameter-effective fine-tuning solutions. # 3.1 Online Human Preference Training Reinforcement Learning from Human Feedback (RLHF) (Ouyang et al., 2022) is designed to learn the human preference signals from external reward models under the PPO framework.
|
2307.12966#22
|
2307.12966#24
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#24
|
Aligning Large Language Models with Human: A Survey
|
Specifically, RLHF consists of three main stages: • Step 1: Collecting a high-quality instruction set and conducting SFT of pre-trained LLMs. • Step 2: Collecting manually ranked comparison response pairs and training a reward model R to judge the quality of generated responses. • Step 3: Optimizing the SFT model (policy) under the PPO reinforcement learning framework with reward calculated by R. In Step 3, to mitigate over-optimization issues, Ouyang et al. (2022) add a KL-divergence regularization between the current model weight and the SFT model weight obtained in Step 1. However, despite being effective in learning human preferences, PPO training is difficult to implement and to train stably. Therefore, Dong et al. (2023) try to remove the PPO training in the above process and propose a novel Reward rAnked FineTuning (RAFT) method, which uses an existing reward model to select the best set of training samples based on the model outputs.
|
2307.12966#23
|
2307.12966#25
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#25
|
Aligning Large Language Models with Human: A Survey
|
Specifically, RAFT first samples a large batch of instructions, then uses the current LLMs to respond to these instructions. These data are then ranked by the reward model and only the top 1/k instances are used for SFT. RAFT can also be used in offline human preference learning, where the global instruction set is continually updated with the top-ranked instructions in each batch. This continuously updates the global instruction set to improve training data quality at each step.
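A minimal sketch of one RAFT iteration as described above; `model.generate`, `reward_model.score`, and `model.supervised_finetune` are placeholder APIs, and the sampling sizes are illustrative defaults rather than values from the paper.

```python
def raft_step(model, reward_model, instructions: list[str],
              samples_per_prompt: int = 4, keep_ratio: float = 0.25):
    """One Reward-rAnked FineTuning iteration (a sketch, not the reference code).

    1) The current model answers a batch of instructions.
    2) A fixed reward model scores every (instruction, response) pair.
    3) Only the top-ranked fraction is kept and used for ordinary SFT.
    """
    candidates = []
    for x in instructions:
        for _ in range(samples_per_prompt):
            y = model.generate(x)                     # placeholder generation API
            candidates.append((reward_model.score(x, y), x, y))
    candidates.sort(key=lambda t: t[0], reverse=True)
    kept = candidates[: int(len(candidates) * keep_ratio)]
    sft_batch = [(x, y) for _, x, y in kept]
    model.supervised_finetune(sft_batch)              # placeholder SFT call
    return model
```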
|
2307.12966#24
|
2307.12966#26
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#26
|
Aligning Large Language Models with Human: A Survey
|
# 3.2 Ofï¬ ine Human Preference Training Although the above online algorithms have been shown effective in learning human preference, im- plementing these algorithms could be non-trivial because its training procedure requires interaction between policy, behavior policy, reward, and value model, which requires many hyper-parameters to be tuned to achieve better stability and performance. To avoid this issue, researchers also explore learn- ing human preferences in an ofï¬ ine fashion. 3.2.1 Ranking-based Approach As human preferences are often expressed as a rank- ing result over a set of responses, some research efforts directly incorporate the ranking informa- tion into the LLMs ï¬ ne-tuning stage. Rafailov et al. (2023) propose Direct Preference Optimiza- tion (DPO), which implicitly optimizes the same objective as existing RLHF algorithms (i.e., reward function with a KL-divergence term) discussed above.
|
2307.12966#25
|
2307.12966#27
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#27
|
Aligning Large Language Models with Human: A Survey
|
Specifically, the DPO training objective can be written as: $\mathcal{L}_{\text{DPO}} = -\log \sigma\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{SFT}}(y_w \mid x)} \cdot \frac{\pi_{\text{SFT}}(y_l \mid x)}{\pi_\theta(y_l \mid x)}\right)$ (2), where (x, y_w, y_l) is one instruction and two of the corresponding outputs with y_w ranked higher than y_l. Similarly, Song et al. (2023) propose the Preference Ranking Optimization (PRO) method, an extended version of the reward model training objective proposed in Ziegler et al. (2019), to further fine-tune LLMs to align with human preference. Given instruction x and a set of responses with human preference order y^1 > y^2 > ... > y^n, the objective can be defined as follows: $\mathcal{L}_{\text{PRO}} = -\sum_{k=1}^{n-1} \log \frac{\exp\left(\pi_\theta(y^k \mid x)\right)}{\sum_{i=k}^{n} \exp\left(\pi_\theta(y^i \mid x)\right)}$ (3). PRO also adds the SFT training objective for the regularization purpose. Instead of adapting the reward training objective, Zhao et al. (2023) take the first step to calibrate the sequence likelihood using various ranking functions, including rank loss, margin loss, list rank loss (Liu et al., 2022c) and expected rank loss (Edunov et al., 2018). In addition, they also explore using the SFT training objective and KL-divergence as the regularization term. The experiment results on various text generation tasks show that the rank loss with the KL-divergence term performs the best. However, this paper only uses the BERTScore (Zhang* et al., 2020) between each candidate output and the ground-truth reference to simulate human preferences, and they only conduct experiments on small pre-trained language models (i.e., no larger than 2B). Yuan et al. (2023) propose RRHF, which further optimizes LLaMA-7B to align with human preferences using a similar framework described above. RRHF is based on the list rank loss, but removes the margin terms based on the empirical results.
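To make Eq. (2) concrete, a minimal PyTorch-style sketch of the DPO loss, assuming the summed log-probabilities of each response under the policy and under the frozen SFT reference have already been computed (tensor names are illustrative).

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_w: torch.Tensor, policy_logp_l: torch.Tensor,
             ref_logp_w: torch.Tensor, ref_logp_l: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct Preference Optimization loss for a batch of preference pairs.

    Each tensor holds summed log-probabilities of the preferred (w) or
    dispreferred (l) response under the policy or the frozen SFT reference.
    """
    # Implicit reward margin: beta * [log-ratio(chosen) - log-ratio(rejected)].
    margin = beta * ((policy_logp_w - ref_logp_w) - (policy_logp_l - ref_logp_l))
    # Maximize the probability that the chosen response is ranked higher.
    return -F.logsigmoid(margin).mean()
```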
|
2307.12966#26
|
2307.12966#28
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#28
|
Aligning Large Language Models with Human: A Survey
|
In addition, different from Liu et al. (2022c), RRHF finds that the SFT training objective is more effective and efficient than KL-divergence in preventing LLMs from over-fitting. These results show that different ranking strategies should be adapted for LLMs with different sizes. [Figure 4 diagram: the instruction "Write me a 3-day travelling plan to HK" with two responses A and B, quality feedback A > B, rewritten as "Write me a 3-day travelling plan to HK. Good: A. Bad: B."] Figure 4: The overview of the Chain of Hindsight (CoH) method. Responses with different quality are associated with different prefixes. The CoH training loss is only applied on model output tokens (highlighted by red). 3.2.2 Language-based Approach As reinforcement learning algorithms are hard to optimize and LLMs have strong text understanding ability, some works propose to directly use natural language to inject human preference via SFT. Wang et al. (2023a) introduce the concept of "conditional behavior cloning" from offl
|
2307.12966#27
|
2307.12966#29
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#29
|
Aligning Large Language Models with Human: A Survey
|
ine rein- forcement learning literature (Nguyen et al., 2022) to train LLMs to distinguish high-quality and low- quality instruction responses. Speciï¬ cally, they design different language-based preï¬ xes for differ- ent quality responses (e.g., high-quality response with â Assistant GPT4:â and low-quality response with â Assistant GPT3:â ). This approach can effec- tively leverage both low- and high-quality training data to align LLMs with humans. Chain of Hind- sight (CoH) (Liu et al., 2023b), on the other hand, directly incorporates human preference as a pair of parallel responses discriminated as low-quality or high-quality using natural language preï¬
|
2307.12966#28
|
2307.12966#30
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#30
|
Aligning Large Language Models with Human: A Survey
|
xes. As shown in Figure 4, after assigning human feedback to each model output, CoH concatenates the input instructions, LLM outputs, and the corresponding human feedback together as the input to LLMs. Note that CoH only applies the fine-tuning loss to the actual model outputs, rather than the human feedback sequence and the instructions. During inference, CoH directly puts positive feedback (e.g., good) after the input instructions to encourage the LLMs to produce high-quality outputs. It is worth noting that, similar to Liu et al. (2022a); Ouyang et al. (2022), CoH also incorporates SFT objectives and random word masking to prevent LLMs from over-fitting. An alternative approach is to explicitly incorporate revision-based instructions into LLMs training. Some preliminary studies have shown that many existing state-of-the-art LLMs have the capability to improve the quality of their responses when explicitly prompted to do so (Chen et al., 2023c). Motivated by these findings, Liu et al. (2022b) recommend training LMs to produce edit operations between source (i.e., low-quality responses) and target (i.e., high-quality responses) sequences, which are subsequently integrated into a dynamic programming framework. Liu et al. (2023d) propose a novel type of instruction called realignment, designed to revise responses based on previously generated low-quality feedback and instructions. This compiled data is employed to instruct LLMs to self-correct when they generate bad responses. Similarly, Ye et al. (2023a) accumulate a multi-turn dialogue corpus utilizing this self-correction mechanism built with the ChatGPT models. Each dialogue starts with standard instructions, such as those from the Stanford Alpaca dataset. After ChatGPT has responded to the initial instructions, further revisions are explicitly requested until ChatGPT elects to terminate. They found that LLMs trained using these dialogues demonstrated an effective capacity to elevate the quality of their own responses. # 3.3 Parameter-Effective Training Directly fine-tuning all parameters in large language models (LLMs) would theoretically enable these models to adhere to provided instructions.
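Returning to the CoH construction described above, here is a minimal sketch of how such a training example can be assembled with a loss mask that excludes the instruction and the feedback words; the prefixes and the Hugging Face-style `encode` tokenizer API are illustrative assumptions.

```python
def build_coh_example(instruction: str, good: str, bad: str, tokenizer):
    """Build one Chain-of-Hindsight training sequence (illustrative prefixes).

    Returns token ids plus a loss mask that is 1 only on response tokens, so
    the instruction and the feedback words are excluded from the loss.
    """
    segments = [
        (f"{instruction} ", False),        # instruction: no loss
        ("A good response: ", False),      # hindsight feedback prefix: no loss
        (good, True),                      # high-quality response: apply loss
        (" A bad response: ", False),
        (bad, True),                       # low-quality response: apply loss
    ]
    input_ids, loss_mask = [], []
    for text, supervised in segments:
        ids = tokenizer.encode(text, add_special_tokens=False)
        input_ids.extend(ids)
        loss_mask.extend([1 if supervised else 0] * len(ids))
    return input_ids, loss_mask
```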
|
2307.12966#29
|
2307.12966#31
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#31
|
Aligning Large Language Models with Human: A Survey
|
However, this approach demands not only substan- tial computational resources, such as vast GPU memory but also extensive datasets for instruc- tion training. In an effort to mitigate both com- putational and data requirements for constructing instruction-following LLMs, one potential route is the implementation of Parameter-Effective Fine- tuning strategies. Speciï¬ cally, these methods froze the major part of LLM parameters and only train a limited set of additional parameters. Supplementary Parameters Building upon this strategy, preï¬ x tuning (Li and Liang, 2021) and prompt tuning (Lester et al., 2021) are inspired by the successful application of textual prompts in pre-trained language models (Brown et al., 2020). These methods either prepend trainable tokens to the input layer or each hidden layer, leaving the pa- rameters of LLMs frozen during ï¬ ne-tuning. Sub- sequently, He et al. (2022); Chen et al. (2023a) con- solidated these strategies into uniï¬ ed frameworks, fostering more effective solutions for parameter- efï¬ cient ï¬ ne-tuning. Shadow Parameters While the above method- ologies introduce supplementary parameters to LLMs, the following methods focus on training the weight representing model parameter variance without modifying the number of total model pa- rameters during inference. For instance, Low-Rank Adaptation (LoRA) (Hu et al., 2022) suggests the addition of pairs of rank-decomposition trainable weight matrices (i.e., update matrices) to the exist- ing weights, which are kept frozen. For example, given a neural layer h = W0x, LoRA modiï¬
|
2307.12966#30
|
2307.12966#32
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#32
|
Aligning Large Language Models with Human: A Survey
|
es the forward pass as follows: $h = W_0 x + BAx$ (4), where $W_0 \in \mathbb{R}^{d \times k}$, $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times k}$, with the rank $r \ll \min(d, k)$. LoRA only updates the parameters of A and B during training. Despite being effective, LoRA equally allocates parameter budgets over the whole LLMs, ignoring the varying importance of different weight parameters. Zhang et al. (2023b) propose AdaLoRA to combat this issue.
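A minimal PyTorch sketch of the LoRA update in Eq. (4); the `alpha / rank` scaling is a common LoRA convention rather than part of the equation itself.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """h = W0 x + B A x, with W0 frozen and only A, B trained (Eq. 4 sketch)."""

    def __init__(self, d_in: int, d_out: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)                   # keep W0 frozen
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)    # A in R^{r x d_in}
        self.B = nn.Parameter(torch.zeros(d_out, rank))          # B in R^{d_out x r}, init 0
        self.scaling = alpha / rank                              # common LoRA scaling

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base projection plus the trainable low-rank update.
        return self.base(x) + self.scaling * (x @ self.A.T) @ self.B.T
```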
|
2307.12966#31
|
2307.12966#33
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#33
|
Aligning Large Language Models with Human: A Survey
|
Speciï¬ cally, AdaLoRA ï¬ rst calculates the parameter importance using the training gradient and then determines the r values for different pa- rameters matrix. Dettmers et al. (2023) propose QLoRA that further improves over LoRA by re- ducing memory usage, enabling a 65B LLM to be ï¬ ne-tuned using a single 48G GPU. Speciï¬ cally, QLoRA quantizes the transformer backbone model to 4-bit precision and uses paged optimizers to han- dle memory spikes. Trade-offs For Parameter-efï¬ cient Training There are some successful applications of parameter-efï¬ cient training technologies, including the Alpaca-LoRA project 7, which is based on the Hugging Faceâ s PEFT library (Mangrulkar et al., 2022) to train Alpaca using a single commercial GPU and Xu et al. (2023c), which apply LoRA to all linear layers in LLaMA to improve its adaption capabilities. However, such an effective training ap- proach could also result in under-ï¬
|
2307.12966#32
|
2307.12966#34
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#34
|
Aligning Large Language Models with Human: A Survey
|
tting issues. Sun et al. (2023a) ï¬ nd that given the same set of train- ing instructions, LLMs with LoRA perform worse than the fully ï¬ ne-tuned ones. Furthermore, they also show that when using LoRA, it is preferable to use larger LLMs than larger training instruction datasets because the former solution uses less train- ing costs and achieves better performance than the later one. # 4 Alignment Evaluation After collecting instructions and training LLMs on these instructions, we ï¬ nally consider the eval- uation for alignment quality. In this section, we will discuss benchmarks used for evaluation in Sec- tion 4.1 and the evaluation protocols in Section 4.2. # 4.1 Evaluation Benchmarks There are various benchmarks to evaluate the aligned LLMs. In general, these benchmarks can be categorized into Closed-set Benchmarks and Open-set Benchmarks. The former type focuses on evaluating the skills and knowledge of aligned LLMs, while the latter type often concentrates on the open scenarios where there are no standardized answers.
|
2307.12966#33
|
2307.12966#35
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#35
|
Aligning Large Language Models with Human: A Survey
|
7 https://github.com/tloen/alpaca-lora # 4.1.1 Closed-set Benchmarks The closed-set benchmarks mostly include testing instances whose possible answers are predefined and limited to a finite set (e.g., multiple choices). We discuss some of the most commonly used benchmarks below. We refer readers to Chang et al. (2023) for a more comprehensive introduction of LLMs' evaluation benchmarks. General Knowledge MMLU (Hendrycks et al., 2021) is an English-based benchmark to evaluate LLMs' knowledge in zero-shot and few-shot settings. It comprehensively includes questions from the elementary level to an advanced professional level from 57 subjects including STEM, the humanities, the social sciences, etc. The granularity and breadth of the subjects make MMLU ideal for identifying LLMs'
|
2307.12966#34
|
2307.12966#36
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#36
|
Aligning Large Language Models with Human: A Survey
|
blind spots. There are also several benchmarks attempting to evaluate the general knowledge of Chinese LLMs. C-MMLU (Li et al., 2023c), C-Eval (Huang et al., 2023), M3KE (Liu et al., 2023a) and AGIEval (Zhong et al., 2023) are all Chinese counterparts of MMLU that include diverse sets of questions from multiple subjects with different difficulty levels from various Chinese standardized exams, including Chinese college entrance exams, advanced maths competitions and law exams. The KoLA benchmark (Yu et al., 2023a) is proposed to evaluate the general real-world knowledge of LLMs. Reasoning Reasoning is a fundamental type of human intelligence that is crucial in solving complicated tasks. Interestingly, research finds that LLMs exhibit emergent behaviors, including the reasoning ability, when they are sufficiently large. Thus, there are several benchmarks for evaluating the arithmetic, commonsense, and symbolic reasoning abilities of LLMs. GSM8K (Cobbe et al., 2021) and Maths (Hendrycks et al., 2021) are designed to evaluate the arithmetic reasoning ability of LLMs. CSQA (Talmor et al., 2019) and StrategyQA (Geva et al., 2021) are proposed to evaluate the commonsense reasoning ability, which requires the LLMs to use daily life commonsense to infer in novel situations. Wei et al. (2022b) propose two novel tasks, Last Letter Concatenation and Coin Flip, and measure the symbolic reasoning ability that involves the manipulation of symbols according to formal rules. BBH (Suzgun et al., 2022), a challenging subset of BIG-Bench (bench authors, 2023), focuses on evaluating a wide range of reasoning skills, such as Date Understanding, Word Sorting, and Causal Judgement. Coding HumanEval (Chen et al., 2021), HumanEval+ (Liu et al., 2023c), and MBPP (Austin et al., 2021) are extensively used benchmarks to evaluate the coding skills of LLMs.
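A minimal sketch of how accuracy is computed on such closed-set, multiple-choice benchmarks; the prompt format and answer parsing are illustrative, not the official evaluation harness of any cited benchmark.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for the model under evaluation."""
    raise NotImplementedError

def evaluate_multiple_choice(examples: list[dict]) -> float:
    """Accuracy on MMLU-style items: {'question', 'choices', 'answer' (e.g. 'B')}."""
    correct = 0
    for ex in examples:
        options = "\n".join(
            f"{letter}. {choice}"
            for letter, choice in zip("ABCD", ex["choices"])
        )
        prompt = (
            f"{ex['question']}\n{options}\n"
            "Answer with a single letter (A, B, C, or D):"
        )
        prediction = call_llm(prompt).strip()[:1].upper()
        correct += int(prediction == ex["answer"])
    return correct / len(examples)
```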
|
2307.12966#35
|
2307.12966#37
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#37
|
Aligning Large Language Models with Human: A Survey
|
They encom- pass a vast collection of Python programming prob- lems and corresponding test cases to automatically verify the code generated by Code LLMs. The DS- 1000 benchmark (Lai et al., 2022) comprises 1,000 distinct data science workï¬ ows spanning seven li- braries. It assesses the performance of code genera- tions against test cases and supports two evaluation modes: completion and insertion. # 4.1.2 Open-ended Benchmarks In contrast to the closed-set benchmarks, the re- sponses to open-set benchmarks can be more ï¬ exi- ble and diverse, where aligned LLMs are usually given chatting questions or topics that do not have any ï¬ xed reference answers. Early attempts of open-ended benchmarks, such as Vicuna-80 (Chi- ang et al., 2023), Open-Assistant-953 (Kopf et al., 2023), User-Instructions-252 (Wang et al., 2022a), often leverage a small number of syntactic instruc- tions from LLMs as testing instances. All evalua- tion candidate LLMs are prompted with the same instructions to provide responses, which are then evaluated against human-based or LLMs-based evaluators. However, these types of benchmarks can only provide comparison several LLMs at a time, making it challenging to reveal a fair com- parison among a board range of LLMs, as well as incremental updates when new LLMs become available. AlpacaEval (Dubois et al., 2023) tackles this issue by reporting the Win Rate of the LLMs candidate to the reference LLM text-davinci-003. Accordingly, LLMs with higher Win Rate are gen- erally better than the ones with lower Win Rate. MT-Bench (Zheng et al., 2023) further increases the evaluation difï¬ culty by proposing 80 multi-turn evaluation instances and wishes LLMs could effec- tively capture context information in previous turns. FLASK (Ye et al., 2023b) proposed to provide ï¬ ne- grained evaluation towards aligned LLMs. FLASK includes 1,700 instances from 120 datasets. Each testing instance is labelled with a set of 12 founda- tional and essential â alignment skillsâ
|
2307.12966#36
|
2307.12966#38
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#38
|
Aligning Large Language Models with Human: A Survey
|
(e.g., logical thinking, user alignment, etc.). Accordingly, it is straightforward to evaluate LLMs' capabilities on these skills separately. # 4.2 Evaluation Paradigm As open-ended benchmarks often do not have reference answers, it is essential to rely on external human or LLM evaluators. In this section, we will introduce both human- and LLMs-based evaluation paradigms. 4.2.1 Human-based Evaluation Automatic metrics, such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004), require ground-truth references and have relatively low correlation with human judgments. Thus, they are not feasible for evaluating responses to open-ended questions. To bridge this gap, human annotators are used to evaluate the quality of open-ended model responses. Wang et al. (2022a); Wu et al. (2023) propose to evaluate the response quality in an ordinal classification setting where human annotators are instructed to categorize each response into one of four levels (i.e., acceptable, minor errors, major errors and unacceptable), separately. However, other research has found that such a classification annotation strategy heavily depends on the subjectivity of annotators, which can result in poor inter-rater reliability (Kalpathy-Cramer et al., 2016). Accordingly, Taori et al. (2023) propose to use a pairwise comparison framework for evaluating the output quality of two LLM systems. Given the instruction inputs and two model outputs, the human annotators are asked to select the better one. Furthermore, to accurately evaluate multiple LLMs, Zheng et al. (2023); Dettmers et al. (2023) further introduce the Elo rating system, which calculates the relative skill levels of players in zero-sum games such as chess games.
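A minimal sketch of the Elo update used for such pairwise comparisons; the K-factor of 32 is a common default, not a value taken from the cited papers.

```python
def elo_update(rating_a: float, rating_b: float, score_a: float,
               k: float = 32.0) -> tuple[float, float]:
    """One Elo update after a pairwise comparison.

    score_a is 1.0 if model A's response was preferred, 0.0 if B's was,
    and 0.5 for a tie. k = 32 is a common default, not from the cited work.
    """
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    expected_b = 1.0 - expected_a
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - expected_b)
    return new_a, new_b

# Example: both models start at 1000 and A wins one comparison.
print(elo_update(1000.0, 1000.0, score_a=1.0))  # A gains and B loses 16 points
```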
|
2307.12966#37
|
2307.12966#39
|
2307.12966
|
[
"2307.03109"
] |
2307.12966#39
|
Aligning Large Language Models with Human: A Survey
|
Specifically, in the Elo system, the player scores are updated based on the result of each pairwise comparison and the current player scores. 4.2.2 LLMs-based Evaluation While human evaluations are often of high quality, they can be inefficient and expensive. In addition, the increasing quality of generated text from LLMs makes it more challenging for human annotators to distinguish between human-written and LLM-generated text in open-ended NLP tasks (Clark et al., 2021). Given the strong text capability of LLMs, recent studies propose to incorporate LLMs into the output text evaluation in various NLP tasks without additional expensive references and human efforts. Tang et al. (2023) propose to improve the traditional automatic metrics by increasing the number of references via LLM-based paraphrasing systems. However, such a method still requires one reference for each evaluation instance. In contrast, Liu et al. (2023e); Fu et al. (2023); Chen et al. (2023d); Chiang and Lee (2023) propose to directly use LLMs to evaluate the generated text quality without a single reference in a wide range of Natural Language Generation (NLG) tasks.
There are also research efforts that propose LLM-based evaluation frameworks for specific NLG tasks, including text summarization (Gao et al., 2023), code generation (Zhuo, 2023), open-ended QA (Bai et al., 2023), and conversations (Lin and Chen, 2023). Due to the flexibility of prompts, it is also possible to conduct multi-dimensional evaluation of the generated text (Lin and Chen, 2023; Fu et al., 2023). Min et al. (2023); Zha et al. (2023) propose to evaluate factual correctness using both closed-source and open-source LLMs. Similar to human evaluation, there are also research efforts that explicitly prompt LLMs to conduct pairwise comparisons: to compare the capabilities of two LLMs, instead of assigning scores separately, Dubois et al. (2023); Zheng et al. (2023) explicitly prompt GPT-4 to select the better of two responses to the same instruction input.

LLMs Evaluation Bias Although LLMs achieve impressive consistency with human judgment, Wang et al. (2023b) find that this LLM-based evaluation paradigm suffers from positional bias: strong LLM judges (e.g., GPT-4) tend to assign higher scores to the candidate that appears first. To calibrate this bias, they propose to a) repeat the LLM evaluation multiple times with different candidate orderings and b) explicitly prompt the LLM to produce a chain of thought for the evaluation before assigning the actual score. Wu and Aji (2023) find that LLM-based evaluation prefers candidates with factual errors over shorter candidates and over candidates with grammatical errors, even though the former may pose greater danger than the latter. To address this bias, they propose a multi-dimensional Elo rating system that evaluates candidates separately from the perspectives of accuracy, helpfulness, and language, allowing a more comprehensive understanding of candidate quality than previous one-shot evaluation.
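The sketch below illustrates LLM-based pairwise comparison together with the order-swapping mitigation for positional bias described above; the prompt format, verdict labels, and query_llm callable are again illustrative assumptions rather than the cited papers' exact setups.

```python
def pairwise_verdict(query_llm, instruction: str, first: str, second: str) -> str:
    # Ask the judge LLM to reason step by step, then answer "A", "B", or "TIE" on the last line.
    prompt = (
        f"Instruction: {instruction}\n"
        f"Response A: {first}\n"
        f"Response B: {second}\n"
        "Compare the two responses step by step, then write a final line "
        "containing only 'A', 'B', or 'TIE'."
    )
    lines = [l.strip() for l in query_llm(prompt).splitlines() if l.strip()]
    return lines[-1].upper() if lines else "TIE"

def debiased_compare(query_llm, instruction: str, out_1: str, out_2: str) -> str:
    # Query the judge twice with the candidate order swapped to counteract positional bias.
    forward = pairwise_verdict(query_llm, instruction, out_1, out_2)
    backward = pairwise_verdict(query_llm, instruction, out_2, out_1)
    if forward == "A" and backward == "B":
        return "out_1"   # preferred in both orders
    if forward == "B" and backward == "A":
        return "out_2"   # preferred in both orders
    return "tie"         # disagreement across orders is treated as a tie
```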
Zheng et al. (2023) systematically characterize the biases of LLM-based evaluation systems. On top of positional and length biases, they also identify a self-enhancement bias, meaning that LLM judges favor their own responses over those from other sources. To tackle these biases, their solutions include swapping responses, adding few-shot examples, and leveraging chain-of-thought and reference information.

Evaluation-Specific LLM Despite achieving high-quality automatic evaluation results, the above approaches rely heavily on state-of-the-art closed-source LLMs (e.g., GPT-4), which could raise data privacy issues. Zheng et al. (2023) therefore propose to train evaluation-specific LLMs. PandaLM (Wang et al., 2023c) is such a specialized evaluation LLM, built by fine-tuning LLaMA-7B on around 300K high-quality synthetic evaluation instructions generated from GPT-3.5. Specifically, they first collect large volumes of instructions as well as outputs from a diverse range of open-source LLMs, such as LLaMA-7B and Bloom-7B.
They then prompt GPT-3.5 to analyze and evaluate the quality of each pair of outputs. Results on a human-annotated meta-evaluation show that, despite being much smaller, PandaLM achieves on-par evaluation performance compared to GPT-3.5 and GPT-4.

# 5 Challenges and Future Directions

The development of LLM alignment is still at an early stage and thus leaves much room for improvement. In this section, we summarize existing important research efforts on aligning LLMs with humans in Table 1. Below, we discuss some remaining challenges as well as the corresponding future research directions.

Fine-grained Instruction Data Management While research on LLM alignment has been unprecedentedly active, many of these efforts leverage training instructions from diverse sources, making it challenging to fairly compare different methods. As discussed in Section 2.3, there are some interesting findings about the implications of particular instruction datasets. For example, FLAN and programming instructions can improve the reasoning capability of aligned LLMs (Ghosal et al., 2023), and ShareGPT generally performs well across a wide range of benchmarks (Wang et al., 2023d). However, many other aspects of instruction data management remain unclear, including the optimal quality control of instruction data, the optimal ordering of instructions during training, and how to effectively mix different instruction sources. Research on these questions could ultimately enable fine-grained instruction management, allowing researchers and practitioners to construct high-quality instruction data.
Table 1: An overview of popular aligned LLMs, including their size, supported languages, initial LLMs, alignment training method, alignment data, and alignment evaluation.
LLMs Alignment for non-English Languages Most existing research on LLM alignment is English-dominated. While many approaches, such as complex instruction generation (Xu et al., 2023b) and explanation tuning (Mukherjee et al., 2023), are language-agnostic, they only explore English-based prompts, and it is unclear how well these prompts perform when adapted to other languages, which severely hinders the application of LLMs in non-English regions. It would be interesting to see 1) how these alignment technologies perform in various languages, in particular low-resource languages, and 2) how to effectively transfer the effect of LLM alignment across different languages.

LLMs Alignment Training Technologies As shown in Table 1, most existing aligned LLMs are based on simple SFT. However, SFT does not explicitly incorporate human preferences into LLMs; as a result, aligning LLMs solely through SFT could require much more instruction data and training resources. In general, there is a lack of comprehensive investigation into the effect of various training technologies for incorporating human preferences into LLMs. Thus, it is important to develop resource-constrained LLM alignment training frameworks in which a fixed budget of alignment resources is given (e.g., at most 10K instructions, 5 hours of training time, etc.), allowing researchers and practitioners to verify the effectiveness of various training methods. As increasing amounts of instruction data become available, such exploration could further promote effective and environmentally friendly LLM alignment solutions.

Human-in-the-loop LLMs Alignment Data Generation Table 1 shows that ShareGPT data has been widely adopted for LLM alignment. The preliminary analysis in Wang et al. (2023d) also reveals that ShareGPT performs consistently well across a wide range of NLP tasks. These results indicate that humans remain a key factor in improving LLM alignment quality. Different from traditional human annotation frameworks, where humans provide annotations based on given instructions, ShareGPT is a human-in-the-loop alignment solution in which humans freely determine what LLMs should generate. This shows the great potential of human-in-the-loop data generation solutions for LLM alignment.
It will be interesting to explore other types of human-in-the-loop solutions to further facilitate LLM alignment.

Human-LLM Joint Evaluation Framework Existing LLM evaluation frameworks either use LLMs for efficient evaluation or leverage crowd-sourcing for high-quality evaluation. As shown in (Wu and Aji, 2023; Liu et al., 2023e), state-of-the-art LLMs have demonstrated comparable or superior evaluation capability in various NLP tasks. It is therefore feasible to use LLMs as special evaluation annotators and to develop LLM-human joint evaluation frameworks, where LLMs and humans are assigned different evaluation tasks according to their respective strengths, maintaining both the efficiency and the quality of the evaluation procedure for LLM alignment.

# 6 Conclusion

This survey provides an up-to-date review of recent advances in LLM alignment technologies. We organize these research efforts into Alignment Instruction Collection, Alignment Training, and Alignment Evaluation, and we point out several promising future directions for LLM alignment. We hope this survey provides insightful perspectives and inspires further research on improving LLM alignment.
# References Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, and Melisa Russak. 2023. Becoming self-instruct: intro- ducing early stopping criteria for minimal instruct tuning. arXiv preprint arXiv:2307.03692. Yuvanesh Anand, Zach Nussbaum, Brandon Dud- erstadt, Benjamin Schmidt, and Andriy Mulyar. 2023.
Gpt4all: Training an assistant-style chatbot with large scale data distillation from gpt-3.5-turbo. https://github.com/nomic-ai/gpt4all. Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021. Program synthesis with large language mod- els. arXiv preprint arXiv:2108.07732. Stephen Bach, Victor Sanh, Zheng Xin Yong, Albert Webson, Colin Raffel, Nihal V. Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Fevry, Zaid Alyafeai, Manan Dey, Andrea Santilli, Zhiqing Sun, Srulik Ben-david, Canwen Xu, Gun- jan Chhablani, Han Wang, Jason Fries, Maged Al- shaibani, Shanya Sharma, Urmish Thakker, Khalid Almubarak, Xiangru Tang, Dragomir Radev, Mike Tian-jian Jiang, and Alexander Rush. 2022. Prompt- Source: An integrated development environment and repository for natural language prompts.
In Pro- ceedings of the 60th Annual Meeting of the Associa- tion for Computational Linguistics: System Demon- strations, pages 93â 104, Dublin, Ireland. Associa- tion for Computational Linguistics. Yushi Bai, Jiahao Ying, Yixin Cao, Xin Lv, Yuze He, Xiaozhi Wang, Jifan Yu, Kaisheng Zeng, Yijia Xiao, Haozhe Lyu, et al. 2023. Benchmarking foundation models with language-model-as-an-examiner. arXiv preprint arXiv:2306.04181. BIG bench authors. 2023. Beyond the imitation game: Quantifying and extrapolating the capabilities of lan- guage models. Transactions on Machine Learning Research. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert- Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877â
1901. Curran Associates, Inc. Yihan Cao, Yanbin Kang, and Lichao Sun. 2023. In- struction mining: High-quality instruction data se- lection for large language models. arXiv preprint arXiv:2307.06290. Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Kaijie Zhu, Hao Chen, Linyi Yang, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, et al. 2023. A sur- vey on evaluation of large language models. arXiv preprint arXiv:2307.03109. Jiaao Chen, Aston Zhang, Xingjian Shi, Mu Li, Alex Smola, and Diyi Yang. 2023a.
Parameter-efï¬ cient ï¬ ne-tuning design spaces. In The Eleventh Interna- tional Conference on Learning Representations. Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srini- vasan, Tianyi Zhou, Heng Huang, et al. 2023b. Al- pagasus: Training a better alpaca with fewer data. arXiv preprint arXiv:2307.08701. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Evaluating large lan- Brockman, et al. 2021. arXiv preprint guage models trained on code. arXiv:2107.03374. Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. 2023c. Teaching large language mod- els to self-debug. arXiv preprint arXiv:2304.05128. Yi Chen, Rui Wang, Haiyun Jiang, Shuming Shi, and Ruifeng Xu. 2023d. Exploring the use of large lan- guage models for reference-free text quality evalua- tion: A preliminary empirical study. arXiv preprint arXiv:2304.00723. Zhihong Chen, Feng Jiang, Junying Chen, Tiannan Wang, Fei Yu, Guiming Chen, Hongbo Zhang, Juhao Liang, Chen Zhang, Zhiyi Zhang, Jianquan Li, Xi- ang Wan, Benyou Wang, and Haizhou Li. 2023e. Phoenix: Democratizing chatgpt across languages. CoRR, abs/2304.10453. Cheng-Han Chiang and Hung-yi Lee. 2023. Can large language models be an alternative to human evalua- tions? arXiv preprint arXiv:2305.01937. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E.
Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open- source chatbot impressing gpt-4 with 90%* chatgpt quality. Elizabeth Clark, Tal August, Soï¬ a Serrano, Nikita Haduong, Suchin Gururangan, and Noah A. Smith. 2021. All thatâ s â humanâ is not gold: Evaluating hu- man evaluation of generated text. In Annual Meeting of the Association for Computational Linguistics. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021.
Training veriï¬ ers to solve math word problems. arXiv preprint arXiv:2110.14168. Mike Conover, Matt Hayes, Ankit Mathur, Jianwei Xie, Jun Wan, Sam Shah, Ali Ghodsi, Patrick Wendell, Matei Zaharia, and Reynold Xin. 2023. Free dolly: Introducing the worldâ s ï¬ rst truly open instruction- tuned llm. Yiming Cui, Ziqing Yang, and Xin Yao. 2023a.
Efï¬ - cient and effective text encoding for chinese llama and alpaca. arXiv preprint arXiv:2304.08177. Yiming Cui, Ziqing Yang, and Xin Yao. 2023b. Efï¬ - cient and effective text encoding for chinese llama and alpaca. arXiv preprint arXiv:2304.08177. Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2023. Qlora: Efï¬ cient arXiv preprint ï¬ netuning of quantized llms. arXiv:2305.14314. Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. 2023. Enhancing chat language models by scaling high-quality instructional conver- sations. arXiv preprint arXiv:2305.14233. Hanze Dong, Wei Xiong, Deepanshu Goyal, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang. 2023.
Raft: Reward ranked ï¬ netuning for generative foundation model alignment. arXiv preprint arXiv:2304.06767. Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. 2023. Al- pacafarm: A simulation framework for methods arXiv preprint that learn from human feedback. arXiv:2305.14387.
Sergey Edunov, Myle Ott, Michael Auli, David Grang- ier, and Marcâ Aurelio Ranzato. 2018. Classical structured prediction losses for sequence to se- In Proceedings of the 2018 Con- quence learning. ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 355â 364, New Orleans, Louisiana. Association for Computational Linguistics.
Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. Gptscore: Evaluate as you desire. arXiv preprint arXiv:2302.04166. Mingqi Gao, Jie Ruan, Renliang Sun, Xunjian Yin, Shiping Yang, and Xiaojun Wan. 2023. Human- like summarization evaluation with chatgpt. arXiv preprint arXiv:2304.02554. Xinyang Geng, Arnav Gudibande, Hao Liu, Eric Wal- lace, Pieter Abbeel, Sergey Levine, and Dawn Song. 2023.
Koala: A dialogue model for academic re- search. Blog post. Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. Transactions of the As- sociation for Computational Linguistics, 9:346â 361. Deepanway Ghosal, Yew Ken Chia, Navonil Ma- jumder, and Soujanya Poria. 2023. Flacuna: Un- leashing the problem solving power of vicuna using ï¬ an ï¬ ne-tuning. arXiv preprint arXiv:2307.02053.
Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, et al. 2023. Textbooks are all you need. arXiv preprint arXiv:2306.11644. Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg- Kirkpatrick, and Graham Neubig. 2022. Towards a uniï¬ ed view of parameter-efï¬ cient transfer learning. In International Conference on Learning Represen- tations.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Stein- hardt. 2021. Measuring massive multitask lan- In International Conference guage understanding. on Learning Representations. Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. 2022. Unnatural instructions: Tuning lan- guage models with (almost) no human labor. CoRR, abs/2212.09689. Edward J Hu, yelong shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations.
Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Jiayi Lei, Yao Chuancheng Lv, Yikai Zhang, Fu, Maosong Sun, and Junxian He. 2023. C- eval: A multi-level multi-discipline chinese evalu- ation suite for foundation models. arXiv preprint arXiv:2305.08322.
Sophie Jentzsch and Kristian Kersting. 2023. Chat- gpt is fun, but it is not funny! humor is still challenging large language models. arXiv preprint arXiv:2306.04563. Yunjie Ji, Yan Gong, Yong Deng, Yiping Peng, Qiang Niu, Baochang Ma, and Xiangang Li. 2023. To- wards better instruction following language models for chinese: Investigating the impact of training data and evaluation. CoRR, abs/2304.07854. Yuxin Jiang, Chunkit Chan, Mingyang Chen, and Wei Wang. 2023.
Lion: Adversarial distillation ArXiv, of closed-source large language model. abs/2305.12870. Jayashree Kalpathy-Cramer, J. Peter Campbell, Deniz Erdogmus, Peng Tian, Dharanish Kedarisetti, Chace Moleta, James D. Reynolds, Kelly Hutcheson, Michael J. Shapiro, Michael X. Repka, Philip Fer- rone, Kimberly Drenser, Jason Horowitz, Kemal Sonmez, Ryan Swan, Susan Ostmo, Karyn E. Jonas, R.V. Paul Chan, Michael F. Chiang, Michael F. Chi- ang, Susan Ostmo, Kemal Sonmez, J. Peter Camp- bell, R.V. Paul Chan, Karyn Jonas, Jason Horowitz, Osode Coki, Cheryl-Ann Eccles, Leora Sarna, Au- dina Berrocal, Catherin Negron, Kimberly Denser, Kristi Cumming, Tammy Osentoski, Tammy Check, Mary Zajechowski, Thomas Lee, Evan Kruger, Kathryn McGovern, Charles Simmons, Raghu Murthy, Sharon Galvis, Jerome Rotter, Ida Chen, Xiaohui Li, Kent Taylor, Kaye Roll, Jayashree Kalpathy-Cramer, Deniz Erdogmus, Maria Ana Martinez-Castellanos, Samantha Salinas-Longoria, Rafael Romero, Andrea Arriola, Francisco Olguin- Manriquez, Miroslava Meraz-Gutierrez, Carlos M. Dulanto-Reinoso, and Cristina Montero-Mendoza. 2016.
Plus disease in retinopathy of prematurity: Improving diagnosis by ranking disease severity and using quantitative image analysis. Ophthalmology, 123(11):2345â 2351. Andreas Kopf, Yannic Kilcher, Dimitri von Rutte, Sotiris Anagnostidis, Zhi Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stan- ley, Richâ ard Nagyï¬ , ES Shahul, Sameer Suri, David Glushkov, Arnav Dantuluri, Andrew Maguire, Christoph Schuhmann, Huu Nguyen, and Alexander Mattick. 2023.
Openassistant conversations - de- mocratizing large language model alignment. ArXiv, abs/2304.07327. Yuhang Lai, Chengxi Li, Yiming Wang, Tianyi Zhang, Ruiqi Zhong, Luke Zettlemoyer, Scott Wen tau Yih, Daniel Fried, Sida Wang, and Tao Yu. 2022. Ds- 1000: A natural and reliable benchmark for data sci- ence code generation. ArXiv, abs/2211.11501.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efï¬ cient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045â 3059, Online and Punta Cana, Domini- can Republic. Association for Computational Lin- guistics. Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. 2023a. CAMEL: communicative agents for "mind" exploration of large scale language model society. CoRR, abs/2303.17760. Haonan Li, Fajri Koto, Minghao Wu, Alham Fikri Aji, and Timothy Baldwin. 2023b.
Bactrian- x: A multilingual replicable instruction-following arXiv preprint model with low-rank adaptation. arXiv:2305.15011. Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Bald- win. 2023c. Cmmlu: Measuring massive multitask language understanding in chinese. arXiv preprint arXiv:2306.09212. Xiang Lisa Li and Percy Liang. 2021.
Preï¬ x-tuning: In Optimizing continuous prompts for generation. Proceedings of the the 59th Annual Meeting of Association for Computational Linguistics and the 11th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 4582â 4597, Online. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74â
81, Barcelona, Spain. Association for Computational Linguistics.

Yen-Ting Lin and Yun-Nung Chen. 2023. LLM-eval: Unified multi-dimensional automatic evaluation for open-domain conversations with large language models. arXiv preprint arXiv:2305.13711.

Chuang Liu, Renren Jin, Yuqi Ren, Linhao Yu, Tianyu Dong, Xiaohan Peng, Shuting Zhang, Jianxiang Peng, Peiyi Zhang, Qingqing Lyu, et al. 2023a. M3KE: A massive multi-level multi-subject knowledge evaluation benchmark for Chinese large language models. arXiv preprint arXiv:2305.10263.

Hao Liu, Xinyang Geng, Lisa Lee, Igor Mordatch, Sergey Levine, Sharan Narang, and P. Abbeel. 2022a. Towards better few-shot and finetuning performance with forgetful causal language models.

Hao Liu, Carmelo Sferrazza, and Pieter Abbeel. 2023b. Languages are rewards:
Hindsight ï¬ netuning using human feedback. arXiv preprint arXiv:2302.02676. Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. 2023c. Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation. arXiv preprint arXiv:2305.01210. Ruibo Liu, Chenyan Jia, Ge Zhang, Ziyu Zhuang, Tony X Liu, and Soroush Vosoughi. 2022b.
Second thoughts are best: Learning to re-align with human values from text edits. In Advances in Neural Infor- mation Processing Systems. Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M Dai, Diyi Yang, and Soroush Vosoughi. 2023d. Training socially aligned language models in simulated human society. arXiv preprint arXiv:2305.16960. Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023e.
Gpteval: Nlg evaluation using gpt-4 with better human align- ment. arXiv preprint arXiv:2303.16634. Yixin Liu, Pengfei Liu, Dragomir Radev, and Graham Neubig. 2022c. BRIO: Bringing order to abstractive summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 2890â 2903, Dublin, Ireland. Association for Computational Lin- guistics. Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, et al. 2023.
The ï¬ an col- lection: Designing data and methods for effective in- struction tuning. arXiv preprint arXiv:2301.13688. Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xi- ubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. 2023. Wizardcoder: Empowering code large language models with evol- instruct. arXiv preprint arXiv:2306.08568. Sourab Mangrulkar, Sylvain Gugger, Lysandre Debut, Younes Belkada, and Sayak Paul. 2022. Peft: State- of-the-art parameter-efï¬ cient ï¬ ne-tuning methods. https://github.com/huggingface/peft. Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2023.
Factscore: Fine-grained atomic evaluation of fac- tual precision in long form text generation. arXiv preprint arXiv:2305.14251. Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2022. Cross-task generaliza- tion via natural language crowdsourcing instructions. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3470â
3487, Dublin, Ireland. Association for Computational Linguistics. Niklas Muennighoff, Alexander M Rush, Boaz Barak, Teven Le Scao, Aleksandra Piktus, Nouamane Tazi, Sampo Pyysalo, Thomas Wolf, and Colin Raffel. 2023. Scaling data-constrained language models. arXiv preprint arXiv:2305.16264. Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawa- har, Sahaj Agarwal, Hamid Palangi, and Ahmed Awadallah. 2023.
Orca: Progressive learning from complex explanation traces of gpt-4. arXiv preprint arXiv:2306.02707. Huu Nguyen, Sameer Suri, Ken Tsui, and Christoph Schuhmann. 2023. The oig dataset. Tung Nguyen, Qinqing Zheng, and Aditya Grover. 2022. Conserweightive behavioral cloning for reli- able ofï¬ ine reinforcement learning. arXiv preprint arXiv:2210.05158.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Gray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Infor- mation Processing Systems. Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- In Proceedings of uation of machine translation. the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311â
318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Baolin Peng, Chunyuan Li, Pengcheng He, Michel Gal- ley, and Jianfeng Gao. 2023. Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277. Rafael Rafailov, Archit Sharma, Eric Mitchell, Ste- fano Ermon, Christopher D Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290.
Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, and Houfeng Wang. 2023. Pref- erence ranking optimization for human alignment. arXiv preprint arXiv:2306.17492. Xianghui Sun, Yunjie Ji, Baochang Ma, and Xian- gang Li. 2023a. A comparative study between full- parameter and lora-based ï¬ ne-tuning on chinese in- struction data for instruction following large lan- guage model. arXiv preprint arXiv:2304.08109. Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David D. Cox, Yiming Yang, and Chuang Gan. 2023b.
Principle-driven self-alignment of language models from scratch with minimal human supervision. Mirac Suzgun, Nathan Scales, Nathanael Schärli, Se- bastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, , and Jason Wei. 2022. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A ques- tion answering challenge targeting commonsense knowledge.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149â 4158, Minneapolis, Minnesota. Associ- ation for Computational Linguistics. Tianyi Tang, Hongyuan Lu, Yuchen Eleanor Jiang, Haoyang Huang, Dongdong Zhang, Wayne Xin Zhao, and Furu Wei. 2023. Not all metrics are guilty: Improving nlg evaluation with llm paraphras- ing. arXiv preprint arXiv:2305.15067. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023.
Stanford al- paca: An instruction-following llama model. https: //github.com/tatsu-lab/stanford_alpaca. Introducing mpt-30b: Raising the bar for open-source foundation models. Accessed: 2023-06-22. Guan Wang, Sijie Cheng, Qiying Yu, and Changling Liu. 2023a. OpenChat: Advancing Open-source Language Models with Imperfect Data. Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023b. Large language models are not fair eval- uators. arXiv preprint arXiv:2305.17926. Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, et al. 2023c.
Pandalm: An automatic evaluation benchmark for llm instruction tuning optimization. arXiv preprint arXiv:2306.05087. Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A Smith, Iz Beltagy, et al. 2023d. How far can camels go? exploring the state of instruction tuning on open re- sources. arXiv preprint arXiv:2306.04751. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Al- isa Liu, Noah A. Smith, Daniel Khashabi, and Han- naneh Hajishirzi. 2022a.
Self-instruct: Aligning lan- guage model with self generated instructions. CoRR, abs/2212.10560. Yizhong Wang, Swaroop Mishra, Pegah Alipoormo- labashi, Yeganeh Kordi, Amirreza Mirzaei, Atharva Naik, Arjun Ashok, Arut Selvan Dhanasekaran, An- jana Arunkumar, David Stap, Eshaan Pathak, Gi- annis Karamanolakis, Haizhi Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Kuntal Kumar Pal, Maitreya Patel, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Savan Doshi, Shailaja Keyur Sampat, Siddhartha Mishra, Sujan Reddy A, Sumanta Patro, Tanay Dixit, and Xudong Shen. 2022b.
Super-NaturalInstructions: General- ization via declarative instructions on 1600+ NLP In Proceedings of the 2022 Conference on tasks. Empirical Methods in Natural Language Processing, pages 5085â 5109, Abu Dhabi, United Arab Emi- rates. Association for Computational Linguistics. Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le. 2022a.
Finetuned language models are zero-shot learners. In International Con- ference on Learning Representations. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou. 2022b. Chain of thought prompting elicits reasoning in large language mod- els. In Advances in Neural Information Processing Systems. Minghao Wu and Alham Fikri Aji. 2023.
Style over substance: Evaluation biases for large language models. ArXiv, abs/2307.03025. Minghao Wu, Abdul Waheed, Chiyu Zhang, Muham- mad Abdul-Mageed, and Alham Fikri Aji. 2023. Lamini-lm: A diverse herd of distilled models from large-scale instructions. CoRR, abs/2304.14402. Benfeng Xu, An Yang, Junyang Lin, Quan Wang, Chang Zhou, Yongdong Zhang, and Zhendong Mao. 2023a.
Expertprompting: Instructing large language models to be distinguished experts. arXiv preprint arXiv:2305.14688. Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. 2023b. Wizardlm: Empowering large lan- guage models to follow complex instructions. Canwen Xu, Daya Guo, Nan Duan, and Julian J. McAuley. 2023c.
Baize: An open-source chat model with parameter-efï¬ cient tuning on self-chat data. CoRR, abs/2304.01196. Seonghyeon Ye, Yongrae Jo, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, and Minjoon Seo. 2023a. Iterative self-revising llm empowered by Selfee: self-feedback generation. Blog post. Seonghyeon Ye, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, Seungone Kim, Yongrae Jo, James Thorne, Juho Kim, and Minjoon Seo. 2023b.
Flask: Fine-grained language model evaluation based on alignment skill sets. Jifan Yu, Xiaozhi Wang, Shangqing Tu, Shulin Cao, Daniel Zhang-Li, Xin Lv, Hao Peng, Zijun Yao, Xi- aohan Zhang, Hanming Li, et al. 2023a. Kola: Care- fully benchmarking world knowledge of large lan- guage models. arXiv preprint arXiv:2306.09296.
Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, and Chao Zhang. 2023b. Large language model as attributed training data generator: A tale of diversity and bias. arXiv preprint arXiv:2306.15895. Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. 2023. Rrhf: Rank responses to align language models with human feedback without tears. Yuheng Zha, Yichi Yang, Ruichen Li, and Zhiting Hu. 2023. Alignscore: Evaluating factual consistency with a uniï¬ ed alignment function. arXiv preprint arXiv:2305.16739. Ge Zhang, Yemin Shi, Ruibo Liu, Ruibin Yuan, Yizhi Li, Siwei Dong, Yu Shu, Zhaoqun Li, Zekun Wang, Chenghua Lin, Wen-Fen Huang, and Jie Fu. 2023a. Chinese open instruction generalist: A preliminary release. ArXiv, abs/2304.07987. Qingru Zhang, Minshuo Chen, Alexander Bukharin, Pengcheng He, Yu Cheng, Weizhu Chen, and Tuo Zhao. 2023b. Adaptive budget allocation for In The Eleventh parameter-efï¬ cient ï¬ ne-tuning. International Conference on Learning Representa- tions. Shaolei Zhang, Qingkai Fang, Zhuocheng Zhang, Zhengrui Ma, Yan Zhou, Langlin Huang, Mengyu Bu, Shangtong Gui, Yunji Chen, Xilin Chen, and Yang Feng. 2023c. Bayling: Bridging cross-lingual alignment and instruction following through interac- tive translation for large language models. ArXiv, abs/2306.10968. Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Eval- In International uating text generation with bert. Conference on Learning Representations. Yao Zhao, Mikhail Khalman, Rishabh Joshi, Shashi Narayan, Mohammad Saleh, and Peter J Liu. 2023.
Calibrating sequence likelihood improves condi- In The Eleventh Inter- tional language generation. national Conference on Learning Representations. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685. Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2023.
Agieval: A human- centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364. Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. 2023. Lima: Less is more for align- ment. arXiv preprint arXiv:2305.11206.
Terry Yue Zhuo. 2023. Large language models are state-of-the-art evaluators of code generation. arXiv preprint arXiv:2304.14317.

Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 2019. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593.

# A Appendix

Table 2: The outputs of the original LLaMA tokenizer and the Chinese tokenizer. This example is from Cui et al. (2023b). The input sentence means "Artificial intelligence is an interdisciplinary subject integrating computer science, psychology, philosophy, and other disciplines."

Inputs: 人工智能是计算机科学、心理学、哲学等学科融合的交叉学科。
LLaMA: _, 人, 工, 智, 能, 是, 计, 算, 机, 科, 学, 、, 心, 理, 学, 、, 0xE5, 0x93, 0xB2, 学, 等, 学, 科, 0xE8, 0x9E, 0x8D, 合, 的, 交, 0xE5, 0x8F, 0x89, 学, 科, 。
Chinese: _, 人工智能, 是, 计算机, 科学, 、, 心理学, 、, 哲学, 等, 学科, 融合, 的, 交叉, 学科, 。

# A.1 Training Language-Specific LLMs

The LLMs described above are mostly English-oriented, so it becomes necessary to adapt their strong linguistic abilities to other languages.
Ji et al. (2023); Cui et al. (2023b) demonstrate that the existing English-dominated LLaMA has fewer than 1,000 Chinese characters in its vocabulary, so LLaMA has to represent Chinese characters using a byte-based fallback strategy, which significantly increases input length and decreases inference efficiency. As shown in Table 2, compared to the default LLaMA tokenizer, a specialized Chinese tokenizer trained on a large-scale Chinese corpus produces more compact and semantically meaningful token representations (e.g., long and complex Chinese phrases). To leverage the linguistic knowledge in the original LLaMA, Cui et al. (2023b) propose a two-stage Chinese pre-training solution that enables LLaMA to better understand Chinese inputs. Before training, they first add 20K Chinese words and phrases to the existing LLaMA vocabulary. In the first stage, they train only the input word embeddings and keep the rest of the LLaMA parameters frozen. In the second stage, to save training resources, they add LoRA parameters and jointly train the input word embeddings, the self-attention heads, and the LoRA parameters.
Ji et al. (2023) also report the benefits of such a strategy under a GPT-4 evaluation framework.
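As a rough sketch of this two-stage recipe, the code below extends a tokenizer vocabulary and freezes everything except the input embeddings, corresponding to the first stage. It assumes a HuggingFace-style LLaMA checkpoint; the checkpoint path and the toy list of added tokens are placeholders, and Cui et al. (2023b) train a dedicated Chinese tokenizer on a large corpus rather than simply appending a handful of tokens.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

base = "path/to/llama-7b"  # placeholder path to a local LLaMA-style checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Extend the vocabulary with Chinese words/phrases (a toy subset; Cui et al. add ~20K).
new_tokens = ["人工智能", "计算机", "科学", "心理学", "哲学", "学科", "融合", "交叉"]
num_added = tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; new vocabulary size: {len(tokenizer)}")

# Stage 1: train only the input word embeddings; keep every other parameter frozen.
for param in model.parameters():
    param.requires_grad = False
model.get_input_embeddings().weight.requires_grad = True

# Tokenize the example from Table 2 with the extended tokenizer.
text = "人工智能是计算机科学、心理学、哲学等学科融合的交叉学科。"
print(tokenizer.tokenize(text))
```

The second stage would then attach LoRA adapters (e.g., via a parameter-efficient fine-tuning library) and unfreeze them together with the embeddings, as described above.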