Dataset schema (one record per chunk):
doi: string (length 10)
chunk-id: int64 (0–936)
chunk: string (length 401–2.02k)
id: string (length 12–14)
title: string (length 8–162)
summary: string (length 228–1.92k)
source: string (length 31)
authors: string (length 7–6.97k)
categories: string (length 5–107)
comment: string (length 4–398)
journal_ref: string (length 8–194)
primary_category: string (length 5–17)
published: string (length 8)
updated: string (length 8)
references: list
2307.14984
38
Underpinning this mechanism is the cognitive and behavioral understanding exhibited by LLMs. The LLM is prompted with these details and is then responsible for deciding how the content should be shaped in response to the event. Our aim is to minimize manual intervention as much as possible, to highlight the capability of LLMs in simulating authentic user-generated content. The approach mirrors the way real-world users form their posts in response to distinct events, aligning the text generation process with the emotional or attitudinal dynamics of users. In this manner, we are able to use LLMs to emulate the content creation process on social networks with high fidelity. # 4.4.2 Interaction Behavior During the simulation, when a user receives a message from one of their followees, a critical decision must be made: whether to repost, to post new content, or to do nothing. That is to say, the interaction behavior includes reposting (forwarding) the original content and posting new content about the same social event. The user’s interaction behavior plays a pivotal role in propagating messages to the user’s followers, facilitating the spread of information within the social network. However, modeling the complex mechanisms governing a user’s interaction behavior poses significant challenges. To address this challenge, we employ large language models to capture the intricate relationship between the user, post features, and interaction behavior.
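In practice, this content-generation step amounts to assembling a prompt from the user’s profile, current emotion and attitude, and the triggering event, and letting the LLM write the post. The sketch below illustrates one way this could look; the prompt wording, field names, and the `chat` helper are illustrative assumptions rather than the authors’ exact implementation (the paper only states that GPT-3.5 or ChatGLM-6B is used).

```python
# Hypothetical sketch of the content-generation step (Sec. 4.4.1/4.4.2).
# Prompt template, field names, and the `chat` callable are assumptions,
# not the S3 authors' exact implementation.
from dataclasses import dataclass


@dataclass
class AgentState:
    gender: str
    age: int
    occupation: str
    emotion: str   # e.g. "anxious", "calm"
    attitude: str  # e.g. "supportive of the policy"


def build_post_prompt(agent: AgentState, event: str) -> str:
    return (
        f"You are a {agent.age}-year-old {agent.gender} {agent.occupation} "
        f"on a social network. You currently feel {agent.emotion}, and your "
        f"attitude toward the event is: {agent.attitude}.\n"
        f"Event: {event}\n"
        "Write a short post reacting to this event, in your own voice."
    )


def generate_post(agent: AgentState, event: str, chat) -> str:
    # `chat` is any callable that sends a prompt to an LLM backend
    # (e.g. the GPT-3.5 API or ChatGLM-6B, as named in the paper)
    # and returns the generated text.
    return chat(build_post_prompt(agent, event))
```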
2307.14984#38
S3: Social-network Simulation System with Large Language Model-Empowered Agents
Social network simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the formidable human-like capabilities exhibited by large language models (LLMs) in sensing, reasoning, and behaving, and utilize these qualities to construct the S$^3$ system (short for $\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to the widely employed agent-based simulation paradigm, we employ prompt engineering and prompt tuning techniques to ensure that the agent's behavior closely emulates that of a genuine human within the social network. Specifically, we simulate three pivotal aspects: emotion, attitude, and interaction behaviors. By endowing the agent in the system with the ability to perceive the informational environment and emulate human actions, we observe the emergence of population-level phenomena, including the propagation of information, attitudes, and emotions. We conduct an evaluation encompassing two levels of simulation, employing real-world social network data. Encouragingly, the results demonstrate promising accuracy. This work represents an initial step in the realm of social network simulation empowered by LLM-based agents. We anticipate that our endeavors will serve as a source of inspiration for the development of simulation systems within, but not limited to, social science.
http://arxiv.org/pdf/2307.14984
Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li
cs.SI
null
null
cs.SI
20230727
20231019
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2210.02414" }, { "id": "2304.03442" }, { "id": "2110.05352" } ]
2307.15217
38
# 3.3.2 Policy Misgeneralization Fundamental: Policies can perform poorly in deployment even if rewards seen during training were perfectly correct. The deployment distribution can always differ from the training and evaluation distributions in real-world settings (Christiano, 2019). Even with a correct reward signal, a policy can learn to competently pursue the wrong goal whenever the true goal is correlated with other events. Shah et al. (2022), Di Langosco et al. (2022), and Hilton et al. (2020) study this type of failure in depth. Shah et al. (2022) present an example scenario in which a system trained with RLHF misgeneralizes to pursue the mechanism of reward administration itself instead of the intended goal.
2307.15217#38
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.14984
39
Specifically, to leverage the ability of LLMs to simulate a real user’s interaction behavior, we prompt the model with information regarding the user’s demographic properties, i.e., gender, age, and occupation, in addition to the specific posts received, letting the LLM think like the user and make its decision. By such means, we enable the LLM to make predictions regarding the user’s inclination to repost the message or post new content. To summarize, by employing the above approach, we can effectively harness the power of LLMs to predict users’ interaction behavior, taking into account various user and post features. # 4.5 Other Implementation Details The system employs various techniques for utilizing or adapting large language models to the agent-based simulation. For prompting-driven methods, we use either the GPT-3.5 API provided by OpenAI (https://platform.openai.com/overview) or a ChatGLM-6B model [10]. For fine-tuning methods, we tune the open-source ChatGLM model. # 5 Discussions and Open Problems The S3 system represents an initial endeavor to harness the capabilities of large language models to facilitate simulation within the domain of social science. In light of this, our analysis delves further into its application and limitations, along with promising future improvements.
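Operationally, the interaction-behavior prediction described here can be implemented as a single classification-style prompt: give the model the user’s demographics and the received post, and parse a categorical decision from its reply. The sketch below is one plausible realization; the prompt text, option names, and parsing logic are assumptions, and `chat` again stands for whichever backend (GPT-3.5 API or ChatGLM-6B) is in use.

```python
# Hypothetical sketch of the interaction-behavior decision (Sec. 4.4.2 / 4.5).
# Prompt wording and the REPOST/POST/IGNORE options are assumptions; the paper
# only states that demographics and received posts are given to the LLM.
def decide_interaction(gender: str, age: int, occupation: str,
                       post_text: str, chat) -> str:
    prompt = (
        f"You are a {age}-year-old {gender} {occupation} on a social network. "
        f"One of the accounts you follow posted:\n\"{post_text}\"\n"
        "What would you do? Answer with exactly one word: "
        "REPOST (forward it), POST (write your own post about the event), "
        "or IGNORE."
    )
    answer = chat(prompt).strip().upper()
    # Check REPOST before POST, since "POST" is a substring of "REPOST".
    for choice in ("REPOST", "POST", "IGNORE"):
        if choice in answer:
            return choice
    return "IGNORE"  # conservative fallback when the reply cannot be parsed
```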
2307.14984#39
S3: Social-network Simulation System with Large Language Model-Empowered Agents
Social network simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the formidable human-like capabilities exhibited by large language models (LLMs) in sensing, reasoning, and behaving, and utilize these qualities to construct the S$^3$ system (short for $\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to the widely employed agent-based simulation paradigm, we employ prompt engineering and prompt tuning techniques to ensure that the agent's behavior closely emulates that of a genuine human within the social network. Specifically, we simulate three pivotal aspects: emotion, attitude, and interaction behaviors. By endowing the agent in the system with the ability to perceive the informational environment and emulate human actions, we observe the emergence of population-level phenomena, including the propagation of information, attitudes, and emotions. We conduct an evaluation encompassing two levels of simulation, employing real-world social network data. Encouragingly, the results demonstrate promising accuracy. This work represents an initial step in the realm of social network simulation empowered by LLM-based agents. We anticipate that our endeavors will serve as a source of inspiration for the development of simulation systems within, but not limited to, social science.
http://arxiv.org/pdf/2307.14984
Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li
cs.SI
null
null
cs.SI
20230727
20231019
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2210.02414" }, { "id": "2304.03442" }, { "id": "2110.05352" } ]
2307.15217
39
Shah et al. (2022) present an example scenario in which a system trained with RLHF misgeneralizes to pursue the mechanism of reward administration itself instead of the intended goal. Fundamental: Optimal RL agents tend to seek power. RL agents have an incentive to seek power when possible to help them accomplish their goals (Turner, 2021; Turner et al., 2019; Turner and Tadepalli, 2022; Ngo, 2022; Krakovna and Kramar, 2023; Ngo, 2022). Versions of this can emerge from the way that RLHF is typically used to finetune LLMs. For example, a question-answering LLM trained with RLHF would be incentivized to influence human interlocutors in order to avoid conversations about challenging topics. Sycophantic behavior from LLMs offers another example (Perez et al., 2022b). # 3.3.3 Distributional Challenges There are challenges posed by the distribution of outputs produced by the model both before and after training.
2307.15217#39
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.14984
40
In light of this, our analysis delves further into its application and limitations, along with promising future improvements. # 5.1 Application of S3 System Leveraging the powerful capabilities of large language models, this system excels in agent-based simulation. The system has the following applications in the field of social science. • Prediction. Prediction is the most fundamental ability of agent-based simulation. Large language model-based simulation can be utilized to predict social phenomena, trends, and individual behaviors from historically collected data. For example, in economics, language models can help forecast market trends, predict consumer behavior, or estimate the impact of policy changes. In sociology, these models can aid in predicting social movements, public opinion shifts, or the adoption of new cultural practices. • Reasoning and explanation. During the simulation, each agent can be easily configured, and thus the system can facilitate reasoning and explanation in social science by generating phenomena under different configurations. Comparing the simulation results can help explain the causes of specific phenomena. Furthermore, the agent can be probed with prompts, which can reveal how a human takes actions in the social environment.
2307.14984#40
S3: Social-network Simulation System with Large Language Model-Empowered Agents
Social network simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the formidable human-like capabilities exhibited by large language models (LLMs) in sensing, reasoning, and behaving, and utilize these qualities to construct the S$^3$ system (short for $\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to the widely employed agent-based simulation paradigm, we employ prompt engineering and prompt tuning techniques to ensure that the agent's behavior closely emulates that of a genuine human within the social network. Specifically, we simulate three pivotal aspects: emotion, attitude, and interaction behaviors. By endowing the agent in the system with the ability to perceive the informational environment and emulate human actions, we observe the emergence of population-level phenomena, including the propagation of information, attitudes, and emotions. We conduct an evaluation encompassing two levels of simulation, employing real-world social network data. Encouragingly, the results demonstrate promising accuracy. This work represents an initial step in the realm of social network simulation empowered by LLM-based agents. We anticipate that our endeavors will serve as a source of inspiration for the development of simulation systems within, but not limited to, social science.
http://arxiv.org/pdf/2307.14984
Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li
cs.SI
null
null
cs.SI
20230727
20231019
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2210.02414" }, { "id": "2304.03442" }, { "id": "2110.05352" } ]
2307.15217
40
# 3.3.3 Distributional Challenges There are challenges posed by the distribution of outputs produced by the model both before and after training. Tractable: The pretrained model introduces biases into policy optimization. RLHF in LLMs typically begins with a base model that has been pretrained on internet text. This base model is typically used both as the initialization for the RL policy network and as the reference model for KL regularization. Korbak et al. (2022b) formalize how RL with these KL penalties can be viewed as a form of Bayesian inference with the base model determining the prior. While empirically useful, this causes the base model to significantly influence the final model. Using a base model that has been pretrained on web text is a convenient initialization – not a principled one. Moreover, internet text encodes harmful biases (e.g., about human demographics), which are then inherited by the downstream model (Weidinger et al., 2021). These biases can persist through the RLHF training process. For example, if sounding confident and producing correct answers are correlated in the base model, the reward model will learn that sounding confident is good and reinforce this in the policy.
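The Bayesian-inference view referenced here can be stated compactly. Below is a minimal sketch, in our own notation, of the standard KL-regularized RLHF objective with the pretrained base model $\pi_0$ as the reference; its optimum is the base model reweighted by exponentiated reward, i.e., $\pi_0$ plays the role of a prior.

```latex
% KL-regularized RLHF objective (notation ours, not Korbak et al.'s):
\max_{\pi}\;
  \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi(\cdot\mid x)}\big[\, r(x, y) \,\big]
  \;-\; \beta\,\mathrm{KL}\!\big( \pi(\cdot\mid x) \,\big\|\, \pi_0(\cdot\mid x) \big),
\qquad
% whose optimizer is the base model reweighted by exponentiated reward:
\pi^{*}(y \mid x) \;\propto\; \pi_0(y \mid x)\,\exp\!\big( r(x, y)/\beta \big).
```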
2307.15217#40
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.14984
41
• Pattern discovery and theory construction. Because repeated simulation costs far less than real-world data collection, the simulation process can reveal patterns of the social network. By uncovering patterns, these models can contribute to the development of new theories and insights. Furthermore, researchers can configure all the agents and the social network environment based on an assumption or theory, and observe the simulation results. Testing the simulation results can help validate whether the proposed assumption or theory is correct. • Policy making. The simulation can inform evidence-based policy-making by simulating and evaluating the potential outcomes of different policy interventions. It can assess the impact of policy changes on various social factors, including individual agents and the social environment. For example, in public health, it can simulate the spread of infectious diseases to evaluate the effectiveness of different intervention strategies. In urban planning, it can simulate the impact of transportation policies on traffic congestion or air pollution by modeling how such policies affect users’ choice of public transportation. By generating simulations, these models can aid policymakers in making informed decisions. # Improvement on Individual-level Simulation
2307.14984#41
S3: Social-network Simulation System with Large Language Model-Empowered Agents
Social network simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the formidable human-like capabilities exhibited by large language models (LLMs) in sensing, reasoning, and behaving, and utilize these qualities to construct the S$^3$ system (short for $\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to the widely employed agent-based simulation paradigm, we employ prompt engineering and prompt tuning techniques to ensure that the agent's behavior closely emulates that of a genuine human within the social network. Specifically, we simulate three pivotal aspects: emotion, attitude, and interaction behaviors. By endowing the agent in the system with the ability to perceive the informational environment and emulate human actions, we observe the emergence of population-level phenomena, including the propagation of information, attitudes, and emotions. We conduct an evaluation encompassing two levels of simulation, employing real-world social network data. Encouragingly, the results demonstrate promising accuracy. This work represents an initial step in the realm of social network simulation empowered by LLM-based agents. We anticipate that our endeavors will serve as a source of inspiration for the development of simulation systems within, but not limited to, social science.
http://arxiv.org/pdf/2307.14984
Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li
cs.SI
null
null
cs.SI
20230727
20231019
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2210.02414" }, { "id": "2304.03442" }, { "id": "2110.05352" } ]
2307.15217
41
Tractable: RL contributes to mode collapse. RL finetuning decreases the diversity of samples produced by a model (Khalifa et al., 2021; Perez et al., 2022a; Glaese et al., 2022; Go et al., 2023) (a phenomenon known as “mode collapse”). OpenAI (2023) found that RLHF finetuning of GPT-4 harmed its calibration on question-answering. Santurkar et al. (2023) found LLMs finetuned with RLHF expressed a narrow distribution of political views. Mode collapse is plausibly due in part to switching from the supervised pretraining objective to an RL objective (Song et al., 2023). RL incentivizes the policy to output high-scoring completions with high probability, rather than with probabilities in line with the training distribution. Addressing this is complicated because mode collapse can be beneficial or harmful in different cases. For example, if an LLM assistant is 90% sure the answer to a question is “yes”, it is better for the LLM to answer “probably” 100% of the time rather than answering “yes” 90% of the time and
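A two-answer toy case, in our own notation rather than the paper's, makes the incentive explicit: under pure reward maximization with no KL or entropy term, any probability mass off the highest-reward answer strictly lowers expected reward, so the optimum is deterministic.

```latex
% Toy illustration of the mode-collapse incentive (our notation).
% Suppose the correct answer is "yes" with probability 0.9 and the reward is
% 1 for a correct answer, 0 otherwise. If the policy answers "yes" with
% probability q, its expected reward is
\mathbb{E}[r] \;=\; 0.9\,q \;+\; 0.1\,(1 - q) \;=\; 0.1 \;+\; 0.8\,q ,
% which is maximized at q = 1: the RL-optimal policy always answers "yes",
% discarding the calibrated 90/10 distribution of the pretrained model.
```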
2307.15217#41
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.14984
42
# Improvement on Individual-level Simulation The current design of individual simulation still has several limitations requiring further improvement. First, the agent requires more prior knowledge of user behavior, including how real humans sense the social environment and make decisions. In other words, the simulation should encompass an understanding and integration of the intricate contextual elements that influence human behavior. Second, while prior knowledge of user behavior is essential, simulations also need to consider the broader context in which decisions are made. This includes factors such as historical events, social conditions, and personal experiences. By enhancing the agent’s capacity to perceive and interpret contextual cues, more precise simulations can be achieved. # Improvement on Population-level Simulation First, it is better to combine agent-based simulation with system dynamics-based methods. Agent-based simulation focuses on modeling individual entities and their interactions, while system dynamics focuses on modeling the behavior of the complex social system as a whole. Through the fusion of these two methodologies, we can develop more comprehensive simulations, encompassing both micro-level interactions and macro-level systemic behavior. This integration can provide a more accurate representation of population dynamics, including the impact of individual decisions on the overall system.
2307.14984#42
S3: Social-network Simulation System with Large Language Model-Empowered Agents
Social network simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the formidable human-like capabilities exhibited by large language models (LLMs) in sensing, reasoning, and behaving, and utilize these qualities to construct the S$^3$ system (short for $\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to the widely employed agent-based simulation paradigm, we employ prompt engineering and prompt tuning techniques to ensure that the agent's behavior closely emulates that of a genuine human within the social network. Specifically, we simulate three pivotal aspects: emotion, attitude, and interaction behaviors. By endowing the agent in the system with the ability to perceive the informational environment and emulate human actions, we observe the emergence of population-level phenomena, including the propagation of information, attitudes, and emotions. We conduct an evaluation encompassing two levels of simulation, employing real-world social network data. Encouragingly, the results demonstrate promising accuracy. This work represents an initial step in the realm of social network simulation empowered by LLM-based agents. We anticipate that our endeavors will serve as a source of inspiration for the development of simulation systems within, but not limited to, social science.
http://arxiv.org/pdf/2307.14984
Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li
cs.SI
null
null
cs.SI
20230727
20231019
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2210.02414" }, { "id": "2304.03442" }, { "id": "2110.05352" } ]
2307.14984
43
Second, we can consider a broader range of social phenomena. This involves modeling various societal, economic, and cultural factors that influence human behavior and interactions. Examples of social phenomena to consider include social networks, opinion dynamics, cultural diffusion, income inequality, and infectious disease spread. By incorporating these phenomena into the simulation, we can better validate the system’s effectiveness and also gain more insights into social simulation. # Improvement on System Architecture Design First, we can consider incorporating other channels for social event information. It is essential to acknowledge that socially connected users are not the sole providers of information for individuals within social networks. Consequently, the integration of supplementary data sources has the potential to enrich the individual simulation. For instance, recommender systems can be integrated to gather diverse information about social events. This integration can help capture a wider range of perspectives and increase the realism of the simulation. Second, the system architecture should consider improving efficiency, which is essential for running large-scale simulations effectively. Optimizing the system architecture and computational processes can significantly enhance the performance and speed of simulations. To this end, techniques such as parallel computing, distributed computing, and algorithmic optimizations can be employed to reduce computational complexity and improve the efficiency of simulation runs. This allows for faster and more extensive exploration of scenarios, enabling researchers to gain insights more quickly.
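On the efficiency point, one practical observation is that per-agent LLM queries within a single simulation step are independent of one another, so they can be issued concurrently. The sketch below shows one way to do this with Python's standard library; it is an assumption about a reasonable implementation, since the paper only names parallel and distributed computing as general options.

```python
# Hypothetical sketch: run one simulation step with concurrent LLM calls.
# The paper mentions parallel/distributed computing only in general terms;
# the thread-pool approach here is our assumption, not the authors' design.
from concurrent.futures import ThreadPoolExecutor


def simulation_step(agents, incoming_posts, decide, max_workers: int = 16):
    """Collect each agent's decision on the post it received, in parallel.

    `agents` maps agent_id -> agent state; `incoming_posts` maps agent_id ->
    post text; `decide(agent, post)` wraps a network-bound LLM call, so
    threads provide adequate concurrency without extra dependencies.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {
            agent_id: pool.submit(decide, agent, incoming_posts[agent_id])
            for agent_id, agent in agents.items()
            if agent_id in incoming_posts
        }
        return {agent_id: fut.result() for agent_id, fut in futures.items()}
```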
2307.14984#43
S3: Social-network Simulation System with Large Language Model-Empowered Agents
Social network simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the formidable human-like capabilities exhibited by large language models (LLMs) in sensing, reasoning, and behaving, and utilize these qualities to construct the S$^3$ system (short for $\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to the widely employed agent-based simulation paradigm, we employ prompt engineering and prompt tuning techniques to ensure that the agent's behavior closely emulates that of a genuine human within the social network. Specifically, we simulate three pivotal aspects: emotion, attitude, and interaction behaviors. By endowing the agent in the system with the ability to perceive the informational environment and emulate human actions, we observe the emergence of population-level phenomena, including the propagation of information, attitudes, and emotions. We conduct an evaluation encompassing two levels of simulation, employing real-world social network data. Encouragingly, the results demonstrate promising accuracy. This work represents an initial step in the realm of social network simulation empowered by LLM-based agents. We anticipate that our endeavors will serve as a source of inspiration for the development of simulation systems within, but not limited to, social science.
http://arxiv.org/pdf/2307.14984
Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li
cs.SI
null
null
cs.SI
20230727
20231019
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2210.02414" }, { "id": "2304.03442" }, { "id": "2110.05352" } ]
2307.15217
43
# 3.4 Challenges with Jointly Training the Reward Model and Policy RLHF’s dependence on training both a reward model and a policy poses two unique problems. Tractable: Joint training induces distribution shifts. Learning both a reward model and a policy is technically challenging – the reward model influences the learned policy, and the policy determines the distribution of the data used to train the reward model. On one hand, if the reward model is trained on offline data, it is likely to misgeneralize (Levine et al., 2020). On the other hand, if the reward model and policy are learned jointly by gathering feedback from policy samples, the system will be prone to “auto-induced distributional shift” (Krueger et al., 2020; Carroll et al., 2022). Features with overestimated rewards will become gradually more present in the feedback data, and features with underestimated rewards will disappear. Thus errors from the reward model can accumulate and become difficult to correct with feedback once the policy stops generating diverse alternatives (Wu et al., 2021a). Tractable: It is difficult to balance efficiency against overfitting by the policy. The three key steps of RLHF can be performed synchronously, but in practice with LLMs they are often performed serially. In this case, the reward model will typically be inaccurate off-distribution, which is precisely where the policy
2307.15217#43
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.14984
44
Third, it is essential to add an interface for policy intervention. Including an interface that allows policymakers to interact with the simulation can be beneficial. This interface would enable policymakers to input and test various interventions and policies in a controlled environment. By simulating the potential outcomes of different policy decisions, policymakers can make more informed choices. They can also evaluate the potential impact of their interventions on the simulated population. This feature can facilitate evidence-based decision-making and help identify effective strategies. # 6 Conclusion In this paper, we present the S3 system (Social Network Simulation System) as a novel approach aimed at tackling the complexities of social network simulation. By harnessing the advanced capabilities of large language models (LLMs) in the realms of perception, cognition, and behavior, we have established a framework for social network emulation. Our simulations concentrate on three pivotal facets: emotion, attitude, and interactive behaviors. This research marks a significant stride forward in social network simulation, pioneering the integration of LLM-empowered agents. Beyond social science, our work possesses the potential to stimulate the development of simulation systems across diverse domains. Employing this methodology enables researchers and policymakers to attain profound insights into intricate social dynamics, thereby facilitating informed decision-making and effectively addressing various societal challenges. # References
2307.14984#44
S3: Social-network Simulation System with Large Language Model-Empowered Agents
Social network simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the formidable human-like capabilities exhibited by large language models (LLMs) in sensing, reasoning, and behaving, and utilize these qualities to construct the S$^3$ system (short for $\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to the widely employed agent-based simulation paradigm, we employ prompt engineering and prompt tuning techniques to ensure that the agent's behavior closely emulates that of a genuine human within the social network. Specifically, we simulate three pivotal aspects: emotion, attitude, and interaction behaviors. By endowing the agent in the system with the ability to perceive the informational environment and emulate human actions, we observe the emergence of population-level phenomena, including the propagation of information, attitudes, and emotions. We conduct an evaluation encompassing two levels of simulation, employing real-world social network data. Encouragingly, the results demonstrate promising accuracy. This work represents an initial step in the realm of social network simulation empowered by LLM-based agents. We anticipate that our endeavors will serve as a source of inspiration for the development of simulation systems within, but not limited to, social science.
http://arxiv.org/pdf/2307.14984
Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li
cs.SI
null
null
cs.SI
20230727
20231019
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2210.02414" }, { "id": "2304.03442" }, { "id": "2110.05352" } ]
2307.15217
44
will learn to go (Gao et al., 2022; Levine et al., 2020). This is usually solved by obtaining fresh preference labels after a certain number of iterations of policy training. Appropriately setting this hyperparameter is important. Too low, and information in the preference labels is wasted; too high, and the policy navigates to unreliable regions of the reward model (McKinney et al., 2023; Christiano et al., 2017). Without a labeled validation set in the regions the policy is exploring, it is difficult to detect reward over-optimization during training. Helpful approaches might include measuring KL-shift (Gao et al., 2022) or tracking the amount of disagreement in an ensemble of reward models. # 4 Incorporating RLHF into a Broader Framework for Safer AI
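The two monitoring signals named here, the KL shift of the policy away from its reference and disagreement within a reward-model ensemble, are simple to compute from logged rollouts. The sketch below assumes that per-sample log-probabilities under the current policy and the reference model, and per-model reward scores, are available; the function names and thresholds are ours, not from the paper or any particular library.

```python
# Hypothetical monitoring of reward over-optimization during RLHF training.
# Assumes per-sample log-probs under the policy and reference model, plus
# scores from an ensemble of reward models; thresholds are placeholders.
import statistics


def mean_kl_shift(policy_logprobs, reference_logprobs):
    """Monte-Carlo estimate of KL(policy || reference) from samples drawn
    from the policy: average of log pi(y|x) - log pi_ref(y|x)."""
    return statistics.fmean(
        lp - lr for lp, lr in zip(policy_logprobs, reference_logprobs)
    )


def ensemble_disagreement(reward_scores_per_model):
    """Mean per-sample standard deviation across the reward-model ensemble;
    a rising value suggests the policy is drifting off-distribution."""
    per_sample = zip(*reward_scores_per_model)  # model-major -> sample-major
    return statistics.fmean(statistics.pstdev(s) for s in per_sample)


def should_refresh_labels(policy_logprobs, reference_logprobs,
                          reward_scores_per_model,
                          kl_threshold=10.0, disagreement_threshold=0.5):
    # Thresholds must be tuned per setup; the values here are arbitrary.
    return (mean_kl_shift(policy_logprobs, reference_logprobs) > kl_threshold
            or ensemble_disagreement(reward_scores_per_model) > disagreement_threshold)
```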
2307.15217#44
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.14984
45
# References [1] Gati V Aher, Rosa I Arriaga, and Adam Tauman Kalai. Using large language models to simulate multiple humans and replicate human subject studies. In International Conference on Machine Learning, pages 337–371. PMLR, 2023. [2] Robert Axelrod. Advancing the art of simulation in the social sciences. In Simulating social phenomena, pages 21–40. Springer, 1997. [3] Fabian Baumann, Philipp Lorenz-Spreen, Igor M Sokolov, and Michele Starnini. Modeling echo chambers and polarization dynamics in social networks. Physical Review Letters, 124(4):048301, 2020. [4] Fabian Baumann, Philipp Lorenz-Spreen, Igor M Sokolov, and Michele Starnini. Emergence of polarized ideological opinions in multidimensional topic spaces. Physical Review X, 11(1):011012, 2021.
2307.14984#45
S3: Social-network Simulation System with Large Language Model-Empowered Agents
Social network simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the formidable human-like capabilities exhibited by large language models (LLMs) in sensing, reasoning, and behaving, and utilize these qualities to construct the S$^3$ system (short for $\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to the widely employed agent-based simulation paradigm, we employ prompt engineering and prompt tuning techniques to ensure that the agent's behavior closely emulates that of a genuine human within the social network. Specifically, we simulate three pivotal aspects: emotion, attitude, and interaction behaviors. By endowing the agent in the system with the ability to perceive the informational environment and emulate human actions, we observe the emergence of population-level phenomena, including the propagation of information, attitudes, and emotions. We conduct an evaluation encompassing two levels of simulation, employing real-world social network data. Encouragingly, the results demonstrate promising accuracy. This work represents an initial step in the realm of social network simulation empowered by LLM-based agents. We anticipate that our endeavors will serve as a source of inspiration for the development of simulation systems within, but not limited to, social science.
http://arxiv.org/pdf/2307.14984
Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li
cs.SI
null
null
cs.SI
20230727
20231019
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2210.02414" }, { "id": "2304.03442" }, { "id": "2110.05352" } ]
2307.15217
45
# 4 Incorporating RLHF into a Broader Framework for Safer AI Because of the challenges surveyed in Section 3, relying heavily on RLHF for developing safe AI poses risks. While RLHF is useful, it does not solve the fundamental challenges of developing human-aligned AI. More generally, no single strategy should be treated as a comprehensive solution. A better approach is defense in depth: multiple safety measures with uncorrelated failure modes. This is akin to assembling multiple layers of Swiss cheese—each has holes, but when layered can compensate for each other’s failures (Hendrycks et al., 2021). While this type of approach is promising, it also comes with problems. For example, many of the challenges in Section 3 are not unique to RLHF, so it may be hard to find safety methods with uncorrelated failures. In this section, we discuss approaches that can be used to better understand (Section 4.1), improve on (Section 4.2), and complement (Section 4.3) RLHF in various ways as part of a broader agenda for AI safety. # 4.1 Frameworks for Better Understanding RLHF Although RLHF is becoming more widely used, there remain open questions about what factors are at play within it and how they influence the overall outcome. Here, we discuss approaches to address challenges for RLHF.
2307.15217#45
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.14984
46
[5] Paul Bratley, Bennett L Fox, and Linus E Schrage. A guide to simulation, 1987. [6] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020. [7] Gary Charness and Matthew Rabin. Understanding social preferences with simple tests. The Quarterly Journal of Economics, 117(3):817–869, 2002. [8] Bastien Chopard and Michel Droz. Cellular automata modelling of physical systems, pages 6–13, 1998. [9] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
2307.14984#46
S3: Social-network Simulation System with Large Language Model-Empowered Agents
Social network simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the formidable human-like capabilities exhibited by large language models (LLMs) in sensing, reasoning, and behaving, and utilize these qualities to construct the S$^3$ system (short for $\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to the widely employed agent-based simulation paradigm, we employ prompt engineering and prompt tuning techniques to ensure that the agent's behavior closely emulates that of a genuine human within the social network. Specifically, we simulate three pivotal aspects: emotion, attitude, and interaction behaviors. By endowing the agent in the system with the ability to perceive the informational environment and emulate human actions, we observe the emergence of population-level phenomena, including the propagation of information, attitudes, and emotions. We conduct an evaluation encompassing two levels of simulation, employing real-world social network data. Encouragingly, the results demonstrate promising accuracy. This work represents an initial step in the realm of social network simulation empowered by LLM-based agents. We anticipate that our endeavors will serve as a source of inspiration for the development of simulation systems within, but not limited to, social science.
http://arxiv.org/pdf/2307.14984
Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li
cs.SI
null
null
cs.SI
20230727
20231019
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2210.02414" }, { "id": "2304.03442" }, { "id": "2110.05352" } ]
2307.15217
46
Psychology and human-computer interaction. Many of the open questions with RLHF involve the dynamics at play between humans and AI. It remains a challenge to understand the conditions which best allow for safe, reliable human-computer interaction. Specifically, it is unclear what type of feedback (or combination thereof) is optimal for learning human goals, precisely how biases harm the quality of feedback, and how to best select and train human evaluators. As discussed in Section 3, human desires are difficult to express with a reward function (Skalse and Abate, 2022b; Bowling et al., 2023; Vamplew et al., 2022). Further work may be valuable toward inferring what beliefs humans are operating under and either asking for feedback while taking into account human uncertainty (Biyik et al., 2019) or correcting for human biases (Reddy et al., 2019; 2020; Chan et al., 2019; Tian et al., 2023). Reward modeling systems must also take advantage of techniques that distinguish between humans with different levels of expertise (Daniels-Koch and Freedman, 2022), confidence (Zhang et al., 2021), or noisiness (Barnett et al., 2023).
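Hedged sketch of one way to act on the last point: a Bradley-Terry style preference loss with a learned per-annotator reliability scale, so that noisier or less confident raters contribute a flatter likelihood. The function names, shapes, and use of PyTorch here are our illustrative assumptions, not an implementation from the cited works.

```python
# Illustrative sketch (not from the cited works): a Bradley-Terry style
# preference loss where each annotator has a learned reliability weight.
import torch

def per_annotator_preference_loss(r_chosen, r_rejected, annotator_ids, log_beta):
    """
    r_chosen, r_rejected: reward-model scores for the preferred / dispreferred
        responses in each comparison, shape (batch,).
    annotator_ids: which rater produced each comparison, shape (batch,).
    log_beta: learnable per-annotator log-reliability, shape (num_annotators,).
        beta near 0 models a rater whose labels look close to random noise.
    """
    beta = log_beta.exp()[annotator_ids]        # reliability of each label
    logits = beta * (r_chosen - r_rejected)     # sharper for reliable raters
    # Negative log-likelihood that the labeled preference is correct:
    # softplus(-logits) == -log sigmoid(logits).
    return torch.nn.functional.softplus(-logits).mean()

# Hypothetical usage with random scores and 3 annotators.
scores_c, scores_r = torch.randn(8), torch.randn(8)
ids = torch.randint(0, 3, (8,))
log_beta = torch.zeros(3, requires_grad=True)
loss = per_annotator_preference_loss(scores_c, scores_r, ids, log_beta)
loss.backward()
```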
2307.15217#46
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.14984
47
[10] Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. GLM: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 320–335, Dublin, Ireland, May 2022. Association for Computational Linguistics. [11] Rohan Anil et al. PaLM 2 technical report, 2023. [12] James Flamino, Alessandro Galeazzi, Stuart Feldman, Michael W Macy, Brendan Cross, Zhenkun Zhou, Matteo Serafino, Alexandre Bovet, Hernán A Makse, and Boleslaw K Szymanski. Political polarization of news media and influencers on Twitter in the 2016 and 2020 US presidential elections. Nature Human Behaviour, pages 1–13, 2023. [13] Jay W Forrester. System dynamics and the lessons of 35 years. In A systems-based approach to policymaking, pages 199–240. Springer, 1993. [14] Nigel Gilbert and Klaus Troitzsch. Simulation for the social scientist. McGraw-Hill Education (UK), 2005.
2307.14984#47
S3: Social-network Simulation System with Large Language Model-Empowered Agents
Social network simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the formidable human-like capabilities exhibited by large language models (LLMs) in sensing, reasoning, and behaving, and utilize these qualities to construct the S$^3$ system (short for $\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to the widely employed agent-based simulation paradigm, we employ prompt engineering and prompt tuning techniques to ensure that the agent's behavior closely emulates that of a genuine human within the social network. Specifically, we simulate three pivotal aspects: emotion, attitude, and interaction behaviors. By endowing the agent in the system with the ability to perceive the informational environment and emulate human actions, we observe the emergence of population-level phenomena, including the propagation of information, attitudes, and emotions. We conduct an evaluation encompassing two levels of simulation, employing real-world social network data. Encouragingly, the results demonstrate promising accuracy. This work represents an initial step in the realm of social network simulation empowered by LLM-based agents. We anticipate that our endeavors will serve as a source of inspiration for the development of simulation systems within, but not limited to, social science.
http://arxiv.org/pdf/2307.14984
Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li
cs.SI
null
null
cs.SI
20230727
20231019
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2210.02414" }, { "id": "2304.03442" }, { "id": "2110.05352" } ]
2307.15217
47
Sociology and social choice. AI alignment must address not only individuals’ perspectives, but also the norms, expectations, and values of affected groups. Some works have begun to assess whether LLMs can be used to facilitate agreement between different humans (Bakker et al., 2022) and to codify the broad-ranging principles under which deployment of AI systems for public good can be assessed (Floridi and Cowls, 2022; Sartori and Theodorou, 2022). The majority-rule problem with RLHF can also be mitigated by algorithms that explicitly model multiple evaluators (Gordon et al., 2021; Davani et al., 2022; Daniels-Koch and Freedman, 2022; Gordon et al., 2022; Barnett et al., 2023), that tune models to individuals (Kumar et al., 2021), or that use more sophisticated aggregation strategies (Noothigattu et al., 2018). However, none of these approaches can solve the fundamental problem that an AI system cannot be aligned to multiple groups of humans who hold conflicting viewpoints (Dobbe et al., 2021). Many societies, however, confront this fundamental issue regularly. For example, democracies
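To make "more sophisticated aggregation strategies" concrete, the toy sketch below (our illustration, not code from the cited works) compares two classic social-choice rules for combining several evaluators' rankings of candidate responses: plurality over top picks versus a Borda count over full rankings. Even on seven rankings of three responses, the two rules disagree.

```python
# Toy comparison of two aggregation rules over evaluators' rankings.
from collections import Counter

def plurality_winner(rankings):
    """rankings: list of lists, each a best-to-worst ordering of responses."""
    return Counter(r[0] for r in rankings).most_common(1)[0][0]

def borda_winner(rankings):
    scores = Counter()
    for ranking in rankings:
        n = len(ranking)
        for position, response in enumerate(ranking):
            scores[response] += n - 1 - position   # top rank earns most points
    return scores.most_common(1)[0][0]

# Seven evaluators rank responses A, B, C.
rankings = (3 * [["A", "C", "B"]]
            + 2 * [["B", "C", "A"]]
            + 2 * [["C", "B", "A"]])
print(plurality_winner(rankings))  # 'A': most first-place votes
print(borda_winner(rankings))      # 'C': broadly acceptable across rankings
```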
2307.15217#47
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.14984
48
[14] Nigel Gilbert and Klaus Troitzsch. Simulation for the social scientist. McGraw-Hill Education (UK), 2005. [15] Perttu Hämäläinen, Mikke Tavast, and Anton Kunnari. Evaluating large language models in generating synthetic HCI research data: a case study. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pages 1–19, 2023. [16] Marilena Hohmann, Karel Devriendt, and Michele Coscia. Quantifying ideological polarization on a network using generalized euclidean distance. Science Advances, 9(9):eabq2044, 2023. [17] John J Horton. Large language models as simulated economic agents: What can we learn from homo silicus? Technical report, National Bureau of Economic Research, 2023. [18] Peter Kolesar and Warren E Walker. A simulation model of police patrol operations: program description. 1975. [19] Lik-Hang Lee, Tristan Braud, Pengyuan Zhou, Lin Wang, Dianlei Xu, Zijun Lin, Abhishek Kumar, Carlos Bermejo, and Pan Hui. All one needs to know about metaverse: A complete survey on technological singularity, virtual ecosystem, and research agenda. arXiv preprint arXiv:2110.05352, 2021.
2307.14984#48
S3: Social-network Simulation System with Large Language Model-Empowered Agents
Social network simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the formidable human-like capabilities exhibited by large language models (LLMs) in sensing, reasoning, and behaving, and utilize these qualities to construct the S$^3$ system (short for $\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to the widely employed agent-based simulation paradigm, we employ prompt engineering and prompt tuning techniques to ensure that the agent's behavior closely emulates that of a genuine human within the social network. Specifically, we simulate three pivotal aspects: emotion, attitude, and interaction behaviors. By endowing the agent in the system with the ability to perceive the informational environment and emulate human actions, we observe the emergence of population-level phenomena, including the propagation of information, attitudes, and emotions. We conduct an evaluation encompassing two levels of simulation, employing real-world social network data. Encouragingly, the results demonstrate promising accuracy. This work represents an initial step in the realm of social network simulation empowered by LLM-based agents. We anticipate that our endeavors will serve as a source of inspiration for the development of simulation systems within, but not limited to, social science.
http://arxiv.org/pdf/2307.14984
Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li
cs.SI
null
null
cs.SI
20230727
20231019
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2210.02414" }, { "id": "2304.03442" }, { "id": "2110.05352" } ]
2307.15217
48
humans who hold conflicting viewpoints (Dobbe et al., 2021). Many societies, however, confront this fundamental issue regularly. For example, democracies seek to reflect social preferences by soliciting the feedback of individuals. These systems generally fail to align diverse preferences yet tend to be more acceptable than less-democratic alternatives. As such, it is important to analyze RLHF from the lens of social choice theory (Sen, 1986) and work to understand whether the means by which it aggregates preferences is normatively acceptable.
2307.15217#48
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.14984
49
[20] Jiazhen Liu, Shengda Huang, Nathaniel M Aden, Neil F Johnson, and Chaoming Song. Emergence of polarization in coevolving networks. Physical Review Letters, 130(3):037401, 2023. [21] Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. P-tuning: Prompt tuning can be comparable to fine-tuning across scales and tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 61–68, Dublin, Ireland, May 2022. Association for Computational Linguistics. [22] Philipp Lorenz-Spreen, Lisa Oswald, Stephan Lewandowsky, and Ralph Hertwig. A systematic review of worldwide causal and correlational evidence on digital media and democracy. Nature Human Behaviour, 7(1):74–101, 2023. [23] Stefan Luding. Information propagation. Nature, 435(7039):159–160, 2005. [24] Lawrence C Marsh and Meredith Scovill. Using system dynamics to model the social security system. In NBER Workshop on Policy Analysis with Social Security Research Files, pages 15–17, 1978.
2307.14984#49
S3: Social-network Simulation System with Large Language Model-Empowered Agents
Social network simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the formidable human-like capabilities exhibited by large language models (LLMs) in sensing, reasoning, and behaving, and utilize these qualities to construct the S$^3$ system (short for $\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to the widely employed agent-based simulation paradigm, we employ prompt engineering and prompt tuning techniques to ensure that the agent's behavior closely emulates that of a genuine human within the social network. Specifically, we simulate three pivotal aspects: emotion, attitude, and interaction behaviors. By endowing the agent in the system with the ability to perceive the informational environment and emulate human actions, we observe the emergence of population-level phenomena, including the propagation of information, attitudes, and emotions. We conduct an evaluation encompassing two levels of simulation, employing real-world social network data. Encouragingly, the results demonstrate promising accuracy. This work represents an initial step in the realm of social network simulation empowered by LLM-based agents. We anticipate that our endeavors will serve as a source of inspiration for the development of simulation systems within, but not limited to, social science.
http://arxiv.org/pdf/2307.14984
Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li
cs.SI
null
null
cs.SI
20230727
20231019
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2210.02414" }, { "id": "2304.03442" }, { "id": "2110.05352" } ]
2307.15217
49
Assistance games. Assistance games, such as cooperative inverse RL (CIRL) (Hadfield-Menell et al., 2016), provide a framework to analyze algorithms like RLHF. They offer a mathematical model to evaluate different design decisions in the communication of preferences to learning systems. In an assistance game, a human and an agent act together in the environment. Both seek to optimize the human’s latent reward function, while only the human can directly query this reward function. In this model, querying the human is simply an additional action that the robot can take, and it is possible to study different querying strategies or profiles. Studying RLHF as an assistance game emphasizes the performance of the human-robot team. This might suggest alternative preference elicitation methods. Two examples are using active reward learning to determine when to collect feedback and which feedback to request first (Sadigh et al., 2017), and leveraging dialogue models to learn desired feedback-seeking patterns (Krasheninnikov et al., 2022). Of particular interest is understanding the consistency and convergence properties of RLHF, the impact of different error patterns from raters, and the effect of different rates of feedback.
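The querying-as-an-action idea can be made concrete with a toy sketch (our illustration, far simpler than CIRL as formally defined): an agent keeps a belief over two hypothetical latent reward functions, pays a small cost to query the simulated human while its uncertainty is high, and otherwise acts greedily under its belief. All constants and names here are assumptions for illustration.

```python
# Toy assistance-game-style loop: querying the human is just another action.
REWARDS = {"theta_A": {"left": 1.0, "right": 0.0},
           "theta_B": {"left": 0.0, "right": 1.0}}
TRUE_THETA = "theta_A"        # known only to the (simulated) human
QUERY_COST = 0.1

def human_answer():
    """Simulated human states which action they prefer (noiseless here)."""
    return max(REWARDS[TRUE_THETA], key=REWARDS[TRUE_THETA].get)

belief = {"theta_A": 0.5, "theta_B": 0.5}
total = 0.0
for step in range(5):
    uncertainty = 1.0 - max(belief.values())
    if uncertainty > 0.25:
        # Query the human (costly), then do a simple Bayesian update:
        # only reward hypotheses consistent with the stated preference survive.
        preferred = human_answer()
        for theta in belief:
            if max(REWARDS[theta], key=REWARDS[theta].get) != preferred:
                belief[theta] = 0.0
        z = sum(belief.values())
        belief = {t: p / z for t, p in belief.items()}
        total -= QUERY_COST
    else:
        # Act greedily on expected reward under the current belief.
        action = max(["left", "right"],
                     key=lambda a: sum(p * REWARDS[t][a]
                                       for t, p in belief.items()))
        total += REWARDS[TRUE_THETA][action]
print(belief, total)   # belief collapses onto theta_A after one query
```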
2307.15217#49
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.14984
50
[25] Dennis L Meadows, William W Behrens, Donella H Meadows, Roger F Naill, Jørgen Randers, and Erich Zahn. Dynamics of growth in a finite world. Wright-Allen Press Cambridge, MA, 1974. [26] Daniele Notarmuzi, Claudio Castellano, Alessandro Flammini, Dario Mazzilli, and Filippo Radicchi. Universality, criticality and complexity of information propagation in social media. Nature Communications, 13(1):1308, 2022. [27] OpenAI. GPT-4 technical report, 2023. [28] Joon Sung Park, Joseph C O’Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442, 2023. [29] Jiezhong Qiu, Jian Tang, Hao Ma, Yuxiao Dong, Kuansan Wang, and Jie Tang. DeepInf: Social influence prediction with deep learning. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’18, pages 2110–2119, New York, NY, USA, 2018. Association for Computing Machinery.
2307.14984#50
S3: Social-network Simulation System with Large Language Model-Empowered Agents
Social network simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the formidable human-like capabilities exhibited by large language models (LLMs) in sensing, reasoning, and behaving, and utilize these qualities to construct the S$^3$ system (short for $\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to the widely employed agent-based simulation paradigm, we employ prompt engineering and prompt tuning techniques to ensure that the agent's behavior closely emulates that of a genuine human within the social network. Specifically, we simulate three pivotal aspects: emotion, attitude, and interaction behaviors. By endowing the agent in the system with the ability to perceive the informational environment and emulate human actions, we observe the emergence of population-level phenomena, including the propagation of information, attitudes, and emotions. We conduct an evaluation encompassing two levels of simulation, employing real-world social network data. Encouragingly, the results demonstrate promising accuracy. This work represents an initial step in the realm of social network simulation empowered by LLM-based agents. We anticipate that our endeavors will serve as a source of inspiration for the development of simulation systems within, but not limited to, social science.
http://arxiv.org/pdf/2307.14984
Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li
cs.SI
null
null
cs.SI
20230727
20231019
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2210.02414" }, { "id": "2304.03442" }, { "id": "2110.05352" } ]
2307.15217
50
Bayesian inference. Finetuning an LLM using RL with KL penalties on divergence from the pretrained model can be understood as a form of Bayesian inference: conditioning a prior (base LLM) on evidence about the desirable behavior of an LLM provided by the reward model (Korbak et al., 2022b). This perspective on RLHF separates the modeling problem (defining a target distribution specifying the desired behavior of an LLM) and the inference problem (approximating that target distribution) (Korbak et al., 2022a; Go et al., 2023). This can aid in answering questions about how the prior influences the outcome of RLHF. The typical target distribution of RLHF (a Boltzmann distribution) is a particular design choice and other distributions may address some of its limitations by, for example, differently fitting distributional preferences (Khalifa et al., 2021). Similarly, RLHF’s inference algorithm (RL with KL penalties; equivalent to a variational inference approach (Korbak et al., 2022b)) could be replaced by a particular sampling strategy (e.g., rejection sampling or best-of-n sampling).
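To make the correspondence concrete, the identity below states, in our own notation rather than quoted from the paper, the well-known fact that the KL-penalized objective is maximized by an exponentially tilted (Boltzmann-style) version of the base model.

```latex
% KL-regularized finetuning objective over policies \pi, with base model
% \pi_0, learned reward model r, and penalty strength \beta > 0:
\max_{\pi}\; \mathbb{E}_{x \sim \pi}\big[ r(x) \big]
  \;-\; \beta\, \mathrm{KL}\!\left( \pi \,\Vert\, \pi_0 \right)
% The maximizer is the target distribution that the Bayesian reading
% obtains by conditioning the prior \pi_0 on the reward evidence:
\qquad\Longrightarrow\qquad
\pi^{*}(x) \;\propto\; \pi_0(x)\, \exp\!\big( r(x) / \beta \big).
```

Best-of-n sampling, mentioned above as an alternative inference strategy, can be read as a rough approximation to sampling from this tilted distribution: draw n candidates from the base model and keep the one the reward model scores highest.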
2307.15217#50
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.14984
51
[30] William Samuelson and Richard Zeckhauser. Status quo bias in decision making. Journal of risk and uncertainty, 1:7–59, 1988. [31] Fernando P Santos, Yphtach Lelkes, and Simon A Levin. Link recommendation algorithms and dynamics of polarization in online social networks. Proceedings of the National Academy of Sciences, 118(50):e2102141118, 2021. [32] Joseph A Schafer. Spinning the web of hate: Web-based hate propagation by extremist organizations. Journal of Criminal Justice and Popular Culture, 2002. [33] Jonathan Schler, Moshe Koppel, Shlomo Argamon, and James W Pennebaker. Effects of age and gender on blogging. In AAAI spring symposium: Computational approaches to analyzing weblogs, volume 6, pages 199–205, 2006. [34] Peter D Spencer. The effect of oil discoveries on the British economy—theoretical ambiguities and the consistent expectations simulation approach. The Economic Journal, 94(375):633–644, 1984.
2307.14984#51
S3: Social-network Simulation System with Large Language Model-Empowered Agents
Social network simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the formidable human-like capabilities exhibited by large language models (LLMs) in sensing, reasoning, and behaving, and utilize these qualities to construct the S$^3$ system (short for $\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to the widely employed agent-based simulation paradigm, we employ prompt engineering and prompt tuning techniques to ensure that the agent's behavior closely emulates that of a genuine human within the social network. Specifically, we simulate three pivotal aspects: emotion, attitude, and interaction behaviors. By endowing the agent in the system with the ability to perceive the informational environment and emulate human actions, we observe the emergence of population-level phenomena, including the propagation of information, attitudes, and emotions. We conduct an evaluation encompassing two levels of simulation, employing real-world social network data. Encouragingly, the results demonstrate promising accuracy. This work represents an initial step in the realm of social network simulation empowered by LLM-based agents. We anticipate that our endeavors will serve as a source of inspiration for the development of simulation systems within, but not limited to, social science.
http://arxiv.org/pdf/2307.14984
Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li
cs.SI
null
null
cs.SI
20230727
20231019
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2210.02414" }, { "id": "2304.03442" }, { "id": "2110.05352" } ]
2307.15217
51
Worst-case behavior. While RLHF seems to improve the average performance of a system, it is not clear what effects it has on worst-case behavior. It was not designed to make systems adversarially robust, and empirical vulnerabilities of systems trained with RLHF have been demonstrated with jailbreaks and prompt injection attacks (Willison, 2023; Albert, 2023; Oneal, 2023; Li et al., 2023a; Wolf et al., 2023; Liu et al., 2023; Rao et al., 2023; Wei et al., 2023; Shen et al., 2023). As a consequence, it would be valuable to better understand the worst-case behaviors of RLHF systems, potentially through the lenses of theoretical properties (Wolf et al., 2023; El-Mhamdi et al., 2022), decision theory (Casper, 2020), adversarial attacks (Perez et al., 2022a;b; Casper et al., 2023b; Ziegler et al., 2022; Carlini et al., 2023b), or rigorous evaluations (ARC, 2022; OpenAI, 2023; Shevlane et al., 2023). # 4.2 Addressing Challenges with RLHF
2307.15217#51
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.14984
52
[35] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. [36] Klaus G Troitzsch. Social science microsimulation. Springer Science & Business Media, 1996. [37] Jianghao Wang, Yichun Fan, Juan Palacios, Yuchen Chai, Nicolas Guetta-Jeanrenaud, Nick Obradovich, Chenghu Zhou, and Siqi Zheng. Global evidence of expressed sentiment alterations during the COVID-19 pandemic. Nature Human Behaviour, 6(3):349–358, 2022. [38] Jiarong Xie, Fanhui Meng, Jiachen Sun, Xiao Ma, Gang Yan, and Yanqing Hu. Detecting and modelling real percolation and phase transitions of information on social media. Nature Human Behaviour, 5(9):1161–1168, 2021.
2307.14984#52
S3: Social-network Simulation System with Large Language Model-Empowered Agents
Social network simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the formidable human-like capabilities exhibited by large language models (LLMs) in sensing, reasoning, and behaving, and utilize these qualities to construct the S$^3$ system (short for $\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to the widely employed agent-based simulation paradigm, we employ prompt engineering and prompt tuning techniques to ensure that the agent's behavior closely emulates that of a genuine human within the social network. Specifically, we simulate three pivotal aspects: emotion, attitude, and interaction behaviors. By endowing the agent in the system with the ability to perceive the informational environment and emulate human actions, we observe the emergence of population-level phenomena, including the propagation of information, attitudes, and emotions. We conduct an evaluation encompassing two levels of simulation, employing real-world social network data. Encouragingly, the results demonstrate promising accuracy. This work represents an initial step in the realm of social network simulation empowered by LLM-based agents. We anticipate that our endeavors will serve as a source of inspiration for the development of simulation systems within, but not limited to, social science.
http://arxiv.org/pdf/2307.14984
Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li
cs.SI
null
null
cs.SI
20230727
20231019
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2210.02414" }, { "id": "2304.03442" }, { "id": "2110.05352" } ]
2307.15217
52
# 4.2 Addressing Challenges with RLHF Just as RLHF has challenges involving feedback (Section 3.1), the reward model (Section 3.2), and the policy (Section 3.3), there are various methods that can replace or combine with parts of the RLHF pipeline to address each of these types of challenges. Figure 3 outlines these methods. See also Wang et al. (2023) for a survey of methods for aligning LLMs. # 4.2.1 Addressing Challenges with Human Feedback
2307.15217#52
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.14984
53
[39] Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. GLM-130B: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414, 2022. [40] Jing Zhang, Jie Tang, Juanzi Li, Yang Liu, and Chunxiao Xing. Who influenced you? Predicting retweet via social influence locality. ACM Trans. Knowl. Discov. Data, 9(3), April 2015.
2307.14984#53
S3: Social-network Simulation System with Large Language Model-Empowered Agents
Social network simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the formidable human-like capabilities exhibited by large language models (LLMs) in sensing, reasoning, and behaving, and utilize these qualities to construct the S$^3$ system (short for $\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to the widely employed agent-based simulation paradigm, we employ prompt engineering and prompt tuning techniques to ensure that the agent's behavior closely emulates that of a genuine human within the social network. Specifically, we simulate three pivotal aspects: emotion, attitude, and interaction behaviors. By endowing the agent in the system with the ability to perceive the informational environment and emulate human actions, we observe the emergence of population-level phenomena, including the propagation of information, attitudes, and emotions. We conduct an evaluation encompassing two levels of simulation, employing real-world social network data. Encouragingly, the results demonstrate promising accuracy. This work represents an initial step in the realm of social network simulation empowered by LLM-based agents. We anticipate that our endeavors will serve as a source of inspiration for the development of simulation systems within, but not limited to, social science.
http://arxiv.org/pdf/2307.14984
Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li
cs.SI
null
null
cs.SI
20230727
20231019
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2210.02414" }, { "id": "2304.03442" }, { "id": "2110.05352" } ]
2307.15217
53
# 4.2.1 Addressing Challenges with Human Feedback Providing feedback with AI assistance. One way to amplify the abilities of humans is to have AI tools assist in generating feedback. Engineering prompts for an AI system and using it to automate feedback can substantially increase practicality and cost-effectiveness due to reduced reliance on humans. Nonetheless, AI-generated feedback still fundamentally depends on humans because (1) the models providing feedback are trained on human-generated data, and (2) humans control prompts and the process of incorporating feedback. There are several notable examples of AI-generated language feedback (Bai et al., 2022b; Saunders et al., 2022; Ye et al., 2023; Kim et al., 2023; Akyürek et al., 2023; Madaan et al., 2023; Chen et al., 2023; Gilardi et al., 2023; Lee et al., 2023) with research agendas like Recursive Reward Modeling (Leike et al., 2018) and AI Safety via debate (Irving et al., 2018; Du et al., 2023). However, AI-generated feedback has drawbacks. Humans often disagree with AI feedback. The rate of human/AI disagreement will vary by task, but Perez et al. (2022b), Casper et al. (2023b), and Lee et al. (2023) found this to happen up to 10%, 46%, and 22% of
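To make the AI-assisted feedback workflow concrete, the sketch below asks a language model to act as a preference labeler over pairs of candidate responses; the resulting labels can be used in place of (or alongside) human comparisons when fitting a reward model. The `query_model` stub and the prompt wording are hypothetical placeholders rather than an API or prompt from the cited works.

```python
# Minimal sketch of AI-assisted preference labeling. `query_model` is a
# hypothetical stand-in for whatever LLM API is available; the prompt wording
# is illustrative, not taken from any cited paper.

def query_model(prompt: str) -> str:
    """Placeholder: call an instruction-following LLM and return its reply."""
    raise NotImplementedError("Plug in an actual model call here.")

def ai_preference_label(instruction: str, response_a: str, response_b: str) -> int:
    """Ask an LLM which response better follows the instruction.

    Returns 0 if response A is preferred and 1 if response B is preferred.
    """
    prompt = (
        "You are evaluating two assistant responses.\n"
        f"Instruction: {instruction}\n\n"
        f"Response A: {response_a}\n\n"
        f"Response B: {response_b}\n\n"
        "Which response is more helpful and harmless? Answer with a single letter, A or B."
    )
    answer = query_model(prompt).strip().upper()
    return 0 if answer.startswith("A") else 1

# The resulting (prompt, chosen, rejected) triples can then be used in the same
# way as human preference data when fitting a reward model.
```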
2307.15217#53
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
54
[Figure 3: Strategies that can be used to address various problems with RLHF; each approach is discussed in Section 4.2. Strategies for human feedback (§4.2.1): AI assistance, fine-grained feedback, process supervision, translating language to reward, learning from demonstrations. Strategies for the reward model (§4.2.2): direct human oversight, multi-objective oversight, maintaining uncertainty. Strategies for the policy (§4.2.3): aligning LLMs during pretraining, supervised learning.] the time respectively in different experiments. Machines can also exhibit correlated failure modes not found in humans, such as vulnerabilities to some adversarial attacks. The extent to which AI feedback is a viable way to safely augment human feedback remains uncertain. However, it cannot theoretically be a comprehensive solution to AI alignment due to the bootstrapping problem behind ensuring the feedback-providing AI is aligned.
2307.15217#54
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
55
Fine-grained feedback. Many problems with feedback involve difficulty conveying precise information via the feedback signal (Section 3.1.4). To address this, Wu et al. (2023) and Cabi et al. (2019) collect feedback on specific portions of examples, and Wu et al. (2023) additionally collect feedback with respect to different goals of the model (e.g., correctness, relevance). This might improve the quality of the learned reward models, at the cost of making human feedback more expensive to provide. Fine-grained feedback is neither well studied nor widely adopted yet, so additional work to understand its advantages and feasibility will be valuable. Process-based supervision. One challenge with training AI systems to solve problems is the difficulty of supervising performance on multi-step procedures. In RL, rewards can be very sparse for such problems. To address this, some works have trained LLMs to better solve multi-step math problems with process supervision, which provides feedback on intermediate reasoning steps rather than only on the final answer (Uesato et al., 2022; Lightman et al., 2023).
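As a rough illustration of what fine-grained and process-based supervision add over a single outcome label, the sketch below scores each segment (or reasoning step) of an output on several objectives and aggregates the scores into one scalar. The segment format, objective names, and weights are illustrative assumptions, not the schemes used in the cited papers.

```python
# Minimal sketch of aggregating fine-grained (per-segment or per-step) rewards.
# Segment boundaries, objective names, and weights are illustrative assumptions.
from typing import Dict, List

def aggregate_fine_grained_reward(
    segment_rewards: List[Dict[str, float]],
    objective_weights: Dict[str, float],
) -> float:
    """Combine per-segment, per-objective scores into a single scalar reward.

    Each element of `segment_rewards` scores one span of the output
    (e.g., a sentence or an intermediate reasoning step).
    """
    total = 0.0
    for segment in segment_rewards:
        for objective, weight in objective_weights.items():
            total += weight * segment.get(objective, 0.0)
    return total / max(len(segment_rewards), 1)

# Example: three reasoning steps scored for correctness and relevance.
steps = [
    {"correctness": 1.0, "relevance": 0.8},
    {"correctness": 0.0, "relevance": 0.9},  # an incorrect intermediate step
    {"correctness": 1.0, "relevance": 0.7},
]
print(aggregate_fine_grained_reward(steps, {"correctness": 0.7, "relevance": 0.3}))
```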
2307.15217#55
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
56
Translating natural language specifications into a reward model. Many issues with RLHF arise due to the difficulty of fitting a reward function using some constrained type of feedback. An alternative approach can be to generate a reward signal more directly from natural language directions, bypassing the need for feedback on examples. This approach could resemble a technique used by Bai et al. (2022b), which involved using prompts to guide an AI assistant to identify responses that violated certain user-defined specifications. Moreover, Luketina et al. (2019) survey other possible techniques to accomplish this goal in non-LLM settings.
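A minimal sketch of this idea, assuming access to an instruction-following model behind a hypothetical `query_model` stub: the reward is derived directly from a natural-language rule list by asking the model whether each rule is satisfied, with no per-example human feedback. The rule wording and scoring scheme are illustrative, not taken from Bai et al. (2022b).

```python
# Minimal sketch of turning a natural-language specification directly into a
# reward signal. The rules and the `query_model` stub are illustrative assumptions.

RULES = [
    "The response must not contain personal insults.",
    "The response must answer the user's question directly.",
]

def query_model(prompt: str) -> str:
    """Placeholder for a call to an instruction-following LLM."""
    raise NotImplementedError

def specification_reward(user_prompt: str, response: str) -> float:
    """Return the fraction of rules the response satisfies, as judged by an LLM."""
    satisfied = 0
    for rule in RULES:
        verdict = query_model(
            f"Rule: {rule}\nUser prompt: {user_prompt}\nResponse: {response}\n"
            "Does the response satisfy the rule? Answer YES or NO."
        )
        satisfied += verdict.strip().upper().startswith("YES")
    return satisfied / len(RULES)
```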
2307.15217#56
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
57
Learning rewards from demonstrations. An alternative approach to learning a reward model, known as inverse reinforcement learning (IRL) (Ng et al., 2000; Ramachandran and Amir, 2007; Ziebart et al., 2008), involves humans providing demonstrations instead of offering feedback on outputs generated by the model. Jeon et al. (2020) and Bıyık et al. (2022) propose systematic ways of combining demonstrations, preferences, and possibly other types of human feedback to learn reward functions. While demonstrations carry rich information and avoid the need to have a system learn from its own generations, they are often more difficult to gather because they require higher effort and expertise to perform the task. Additionally, the quality of demonstrations is limited by the talent of whatever expert is providing them, which warrants more research on learning from suboptimal human demonstrations (e.g., Brown et al. (2019); Zhang et al. (2021)). # 4.2.2 Addressing Challenges with the Reward Model Using direct human oversight. Although learning a reward model is efficient, it might be necessary to directly provide rewards (MacGlashan et al., 2017) for RL training in certain safety-critical situations.
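The toy sketch below illustrates one way demonstrations and pairwise preferences can be pooled to fit a reward function: each demonstration is treated as preferred over a sampled policy output, and a Bradley-Terry loss is minimized over the combined comparison set. The linear reward, feature vectors, and demo-versus-sample pairing are simplifying assumptions for illustration only.

```python
# Minimal sketch of fitting a reward function from demonstrations plus pairwise
# preferences. The linear reward and the way demonstrations are converted into
# comparisons are simplifying assumptions.
import math
import random

def reward(x, w):
    """Linear reward over a feature vector (a stand-in for a learned reward model)."""
    return sum(xi * wi for xi, wi in zip(x, w))

def fit_reward(demos, policy_samples, human_prefs, steps=200, lr=0.1):
    """Fit w with a Bradley-Terry loss so preferred items score higher.

    Demonstrations are folded in by assuming each demonstration is preferred
    over a randomly drawn policy sample.
    """
    comparisons = list(human_prefs) + [(d, random.choice(policy_samples)) for d in demos]
    w = [0.0] * len(comparisons[0][0])
    for _ in range(steps):
        grad = [0.0] * len(w)
        for better, worse in comparisons:
            diff = reward(better, w) - reward(worse, w)
            p_wrong = 1.0 / (1.0 + math.exp(diff))  # prob. the model ranks this pair incorrectly
            for i in range(len(w)):
                grad[i] -= p_wrong * (better[i] - worse[i])
        w = [wi - lr * g / len(comparisons) for wi, g in zip(w, grad)]
    return w

# Toy usage with 2-dimensional features standing in for response embeddings.
demos = [[1.0, 0.2], [0.9, 0.1]]
policy_samples = [[0.2, 0.9], [0.1, 0.8]]
human_prefs = [([0.8, 0.3], [0.3, 0.7])]
print(fit_reward(demos, policy_samples, human_prefs))
```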
2307.15217#57
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
58
Multi-objective oversight. Richer multi-objective signals that rate outputs on multiple objectives (Vamplew et al., 2022) could lead to more flexible oversight. Current reward models assume that expert feedback is drawn from an underlying unimodal reward function (Barnett et al., 2023; Myers et al., 2021). But this is overly simplistic (Skalse and Abate, 2022b; Bowling et al., 2023). For instance, it can lead to a reward model that merely captures the preferences of the majority, and suppresses the preferences of minorities as noise. Using constraints (Malik et al., 2021; Lindner et al., 2023) or reward models that account for the diversity of preferences by assuming underlying reward functions to be multimodal (Myers et al., 2021; Bakker et al., 2022; Barnett et al., 2023; Siddique et al., 2023; Bhatia et al., 2020) can help mitigate this issue. Multi-objective oversight can also be useful for steering systems toward desired balances between competing values (e.g., helpfulness and harmlessness).
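A minimal sketch of multi-objective oversight, assuming per-objective scores are already available: objectives are combined with explicit weights, and harmlessness is treated as a hard constraint rather than something to be traded away. The objective names, weights, and threshold are illustrative assumptions.

```python
# Minimal sketch of multi-objective oversight: per-objective scores are combined
# with explicit weights and one objective is enforced as a hard constraint.
# Objective names, weights, and the threshold are illustrative assumptions.

def multi_objective_reward(scores, weights, harmlessness_floor=0.5):
    """Combine per-objective scores; veto outputs below a harmlessness floor."""
    if scores["harmlessness"] < harmlessness_floor:
        return float("-inf")  # treat safety as a constraint rather than a trade-off
    return sum(weights[k] * scores[k] for k in weights)

print(multi_objective_reward(
    {"helpfulness": 0.9, "harmlessness": 0.8, "honesty": 0.7},
    {"helpfulness": 0.5, "harmlessness": 0.3, "honesty": 0.2},
))
```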
2307.15217#58
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
59
Maintaining uncertainty over the learned reward function. Given the challenges of accurately learning the appropriate reward function, several studies have emphasized the importance of taking uncertainty in the learned functions into account. Yue et al. (2023) and Liang et al. (2022b) tackle this by having the policy avoid types of states unseen by the reward model. Ensembles of reward functions have also been used to address these challenges (Christiano et al., 2017); this approach can enhance the diversity of text output (Rame et al., 2023) and has applicability for active learning (Gleave and Irving, 2022). Other strategies can include forms of risk-aversion (Hadfield-Menell et al., 2017) or handling uncertainty with a safe “shield” policy (Jansen et al., 2018; Srinivasan et al., 2020; Cohen and Hutter, 2020). # 4.2.3 Addressing Challenges with the Policy
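One simple way to act on reward uncertainty, sketched below under the assumption that an ensemble of reward models scores each output: the policy is trained against a conservative estimate that subtracts a multiple of the ensemble's spread, so outputs the reward models disagree about are penalized. The penalty coefficient is an arbitrary illustrative choice.

```python
# Minimal sketch of a conservative reward computed from an ensemble of reward
# models: disagreement between ensemble members lowers the effective reward.
import statistics

def conservative_reward(ensemble_scores, risk_penalty=1.0):
    mean = statistics.mean(ensemble_scores)
    spread = statistics.pstdev(ensemble_scores)  # high spread = uncertain reward
    return mean - risk_penalty * spread

print(conservative_reward([0.80, 0.75, 0.82]))  # ensemble agrees: close to the mean
print(conservative_reward([0.90, 0.10, 0.50]))  # ensemble disagrees: heavily discounted
```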
2307.15217#59
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
60
# 4.2.3 Addressing Challenges with the Policy Aligning LLMs during pretraining. RLHF in LLMs typically begins by pretraining the LLM on internet text which includes a large amount of undesirable content. Korbak et al. (2023) argue that it can be more effective to use human feedback during pretraining by using a reward model to filter, weight, or annotate pretraining data. This also simplifies the process of aligning models by having them exhibit desirable behaviors from the outset rather than having them learn undesirable behavior and then attempt to unlearn it during finetuning.
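A minimal sketch of reward-based curation of pretraining data, in the spirit of the filtering and weighting described above: each document is scored by a (hypothetical) `score_document` model and either dropped or assigned a loss weight before pretraining. The threshold and weighting rule are illustrative assumptions, not the exact procedure of Korbak et al. (2023).

```python
# Minimal sketch of reward-based curation of pretraining data. `score_document`
# and the threshold/weighting rule are hypothetical placeholders.

def score_document(text: str) -> float:
    """Placeholder for a learned reward or quality model over raw documents."""
    raise NotImplementedError

def curate(corpus, threshold: float = 0.0):
    """Yield (document, loss_weight) pairs for the pretraining loop."""
    for doc in corpus:
        score = score_document(doc)
        if score < threshold:
            continue                      # hard filtering of clearly undesirable text
        yield doc, min(1.0, 0.5 + score)  # soft up-weighting of higher-scoring text
```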
2307.15217#60
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
61
Aligning LLMs through supervised learning. Several techniques for aligning LLMs with human preferences obtain results competitive with RLHF by using supervised learning to complement (Ramamurthy et al., 2022) or replace RL. The simplest variant of this is to perform standard supervised learning on well-curated data. Curation can involve filtering out bad demonstrations (Gehman et al., 2020; Welbl et al., 2021; Dong et al., 2023), compiling a small set of good demonstrations (Solaiman and Dennison, 2021; Sanh et al., 2022; Ibarz et al., 2018; Stiennon et al., 2020; Chung et al., 2022; Bıyık et al., 2022; Zhou et al., 2023), or generating good demonstrations using an LLM, e.g., after conditioning on human feedback provided in natural language (Scheurer et al., 2022; 2023; Chen et al., 2023; Xu et al., 2023b). A different family of methods augments the language modeling objective to utilize feedback provided by the reward model (Korbak et al., 2023; Yuan et al., 2023; Rafailov et al., 2023). This last setting shares similarities with offline RL, which focuses on training an optimal policy using demonstrations annotated with rewards (Levine et al., 2020; Snell et al., 2022; Hu et al., 2023).
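As one concrete example of this supervised, reward-model-free family, the sketch below implements a DPO-style objective in the spirit of Rafailov et al. (2023), assuming per-sequence log-probabilities under the policy and a frozen reference model have been computed elsewhere. The batch shapes and the value of beta are illustrative.

```python
# Minimal sketch of a DPO-style objective in the spirit of Rafailov et al. (2023).
# Per-sequence log-probabilities under the policy and a frozen reference model are
# assumed to be computed elsewhere; beta is a tunable temperature.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """All arguments are tensors of per-sequence log-probabilities, shape (batch,)."""
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    # Push the chosen response's log-ratio above the rejected response's log-ratio.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# Toy call with random stand-ins for the log-probabilities.
batch = 4
loss = dpo_loss(torch.randn(batch), torch.randn(batch),
                torch.randn(batch), torch.randn(batch))
print(loss.item())
```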
2307.15217#61
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
62
# 4.3 RLHF is Not All You Need: Complementary Strategies for Safety Other technical approaches to AI safety should be studied and implemented alongside RLHF. Establishing trust with AI systems should be approached with a combination of principled design choices, rigorous testing, interpretability, verification, and theoretical guarantees where possible (Leike et al., 2018). See also Critch and Krueger (2020), Hubinger (2020), Hendrycks et al. (2021), and Ngo (2022) for additional overviews of strategies for building safer AI. Robustness. As discussed in Section 3.3, models trained with RLHF can still exhibit undesired behavior due to distributional shifts between training and deployment. For example, adversarially engineered user inputs can cause an LLM to output harmful text. To mitigate this problem, developers should use tools to generate inputs which result in undesired behavior and train against these adversarial examples (Zhang and Li, 2019; Ziegler et al., 2022; Perez et al., 2022a; Casper et al., 2023b). Anomaly detection techniques (Omar et al., 2013) can also be useful for flagging abnormal inputs likely to trigger bad behavior. Ensuring the security of important AI training runs against malicious human evaluators and/or outside cybersecurity threats will also be valuable.
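A minimal sketch of the adversarial-training idea, with `propose_prompts`, `generate`, and `undesirability` as hypothetical stubs for an attack generator, the model under test, and a behavior classifier: prompts that reliably elicit undesired behavior are mined and can then be folded back into the finetuning data.

```python
# Minimal sketch of mining adversarial inputs for robustness training.
# `propose_prompts`, `generate`, and `undesirability` are hypothetical stubs.

def propose_prompts(n: int):
    raise NotImplementedError  # e.g., an attacker model or a prompt-mutation scheme

def generate(prompt: str) -> str:
    raise NotImplementedError  # the model under test

def undesirability(response: str) -> float:
    raise NotImplementedError  # e.g., a toxicity or rule-violation classifier

def mine_adversarial_examples(n_candidates: int = 256, threshold: float = 0.8):
    """Return prompts that reliably trigger undesired behavior."""
    found = []
    for prompt in propose_prompts(n_candidates):
        if undesirability(generate(prompt)) > threshold:
            found.append(prompt)
    return found  # candidates to fold back into finetuning as adversarial training data
```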
2307.15217#62
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
63
Risk assessment and auditing. Although training processes should be crafted to produce models that are safe by design, evaluations are another layer of defense. Passing an evaluation is not proof of safety, but as is the case in almost every safety-critical industry, rigorous evaluations of capabilities and risks help to spot hazards and establish trust. In practice, this should involve both in-house and second-party evaluations (OpenAI, 2023; ARC, 2022; Perez et al., 2022b). As with adversarial training for robustness, the development of improved red teaming techniques will be important (Perez et al., 2022a; Casper et al., 2023b).
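A small illustrative helper for audits of this kind: aggregating pass/fail results by risk category so an evaluation run yields an interpretable per-category report rather than a single score. The result format is an assumption made for illustration.

```python
# Small illustrative helper: aggregate audit outcomes by risk category so that an
# evaluation run produces a per-category pass rate. The result format is assumed.
from collections import defaultdict

def audit_report(results):
    """`results` is an iterable of (risk_category, passed) pairs."""
    totals, passes = defaultdict(int), defaultdict(int)
    for category, passed in results:
        totals[category] += 1
        passes[category] += bool(passed)
    return {category: passes[category] / totals[category] for category in totals}

print(audit_report([("jailbreak", True), ("jailbreak", False), ("privacy", True)]))
```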
2307.15217#63
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
64
Interpretability and model editing. Generating human-understandable explanations for the behavior of AI systems is currently an unsolved problem. Progress in explainability and interpretability could help verify hypotheses about how models make decisions (Geiger et al., 2023), including whether the decision-making process is trustworthy. In this way, it could be possible to gain confidence that models will (or will not) behave in a safe way without necessarily conducting extensive testing of the models (Jacovi et al., 2021). Red-teaming can also be complemented by interpretability techniques (Rastogi et al., 2023; Räuker et al., 2023), especially for purposes of identifying adversarial inputs (Ziegler et al., 2022; Casper et al., 2023c;a) or anomalous inputs (Pang et al., 2021). In another direction, better understanding the internal mechanisms of models can aid in directly editing model weights or intervening on internal activations in order to improve truthfulness (Li et al., 2023b), modify a model’s factual knowledge (Meng et al., 2023; 2022; Hernandez et al., 2023; Hase et al., 2023), or otherwise steer model behavior (Cui et al., 2022). # 5 Governance and Transparency
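To make the "intervening on internal activations" idea above concrete, here is a minimal, hypothetical sketch of activation steering: a forward hook adds a fixed vector to one layer's output at inference time. The toy MLP, the layer choice, and the randomly drawn steering vector are illustrative assumptions only, not the procedure of any work cited in this passage.

```python
# Hypothetical sketch: steering a model by adding a vector to one layer's
# activations at inference time. The tiny MLP and the steering vector are
# illustrative stand-ins, not the method of any cited paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(
    nn.Linear(16, 32),  # hidden layer whose activations we intervene on
    nn.ReLU(),
    nn.Linear(32, 4),   # output head
)

# A direction in activation space to push the model toward (in practice this
# might be estimated from contrastive examples rather than sampled at random).
steering_vector = torch.randn(32) * 0.5

def add_steering(module, inputs, output):
    # A forward hook may return a modified output, which replaces the original.
    return output + steering_vector

handle = model[0].register_forward_hook(add_steering)

x = torch.randn(1, 16)
steered_logits = model(x)
handle.remove()                 # restore unmodified behavior
baseline_logits = model(x)

print("difference:", (steered_logits - baseline_logits).abs().sum().item())
```

The point of the sketch is only the mechanism of intervening on internal activations; how a useful steering direction is found is a separate question addressed by the cited works.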
2307.15217#64
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
65
# 5 Governance and Transparency Social scientists and policymakers have increasingly focused on the need for governance frameworks to develop and deploy AI systems responsibly. Across historical contexts, a hallmark of mature scientific fields is the open sharing of research findings (Shapin and Schaffer, 2011) to allow experts to understand progress (Gilbert and Loveridge, 2021). Below we overview components of an RLHF governance agenda, including outstanding questions and risk dimensions.
2307.15217#65
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
66
Incentives and requirements for safety. Competition between labs can generate harmful race dynamics (Dafoe, 2018) because of tradeoffs between competitiveness and caution. This suggests a role for governance in promoting a healthier environment for safe AI research, development, and deployment (Dafoe, 2018; Perry and Uuk, 2019; Falco et al., 2021; Cihon, 2019; Anderljung et al., 2023). Governance in this form could involve mandates for independent auditing, evaluations, and certification (Shavit, 2023; Mökander et al., 2023; ARC, 2022; Hadfield and Clark, 2023; Shevlane et al., 2023); monitoring for post-deployment problems (Hendrycks and Gimpel, 2016); influence over resources including hardware and data (Brief, 2020; Chan et al., 2023a); and prohibiting deployment unless critical standards are met, as in the case of the U.S. Food and Drug Administration’s oversight of clinical trials for testing potential new treatments (Junod, 2008).
2307.15217#66
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
67
Transparency and auditing. A sustained commitment to transparency would make the existing RLHF research environment more robust from a safety standpoint. First, the disclosure of some details behind large RLHF training runs would clarify a given organization’s norms for model scrutiny and safety checks. Second, increased transparency about known efforts to mitigate risks could improve safety incentives and suggest methods for external stakeholders to hold companies accountable. Third, and most relevant for the present paper, transparency would improve the AI safety community’s understanding of RLHF and support the ability to track technical progress on its challenges. Some level of disclosure is a precondition to evaluate the viability of the technical RLHF safety agenda over time and allow for community contribution to it. For all of these reasons, working to incorporate transparency standards into an AI governance framework will be important (Larsson and Heintz, 2020; Anderljung et al., 2023). It is possible that public disclosure of details critical to the development of model capabilities might lead to the unwanted proliferation of AI technologies that could be misused. [Figure 4 (graphic): transparency/auditing items for RLHF, organized by stage (Pretraining, Human Feedback, Reward Model, Policy, Systemic Safety); items include selection/training of humans, selection of examples, type(s) of feedback used, loss function, evaluation and results, internal and external auditing, quality-assurance measures, report on expected risks, and monitoring and handling failures.]
2307.15217#67
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
68
Figure 4: Details behind an implementation of RLHF that, if disclosed, could be indicative of risks. See Section 5 for a complete discussion. Companies using RLHF to train models for high-stakes or safety-critical applications should maintain transparency with the public and/or auditors about key details of their approach. It is possible that public disclosure of details critical to the development of model capabilities might lead to the unwanted proliferation of AI technologies that could be misused. However, detailing safety measures will often not require divulging implementable details, and when it does, private disclosure to second-party auditors (Mökander et al., 2023; ARC, 2022; Hadfield and Clark, 2023; Shevlane et al., 2023) offers a solution. As more specific policy prescriptions are beyond our scope, we encourage elaboration on these topics as part of a future research agenda. Below, however, we outline specific types of details that, if disclosed, could be indicative of risks and should be accounted for when auditing AI systems developed using RLHF. See also Figure 4. Human feedback details: • A description of the pretraining process including details about what data was used to make apparent possible biases that pretraining can cause. • How human evaluators were selected and trained to provide information about risks of evaluators being malicious, unrepresentative, or incapable.
2307.15217#68
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
69
• How human evaluators were selected and trained to provide information about risks of evaluators being malicious, unrepresentative, or incapable. • The process by which examples were selected to obtain feedback to invite scrutiny about their representativeness and whether sufficient adversarial training was used. If examples were crowdsourced from a publicly-available application, details about what measures were taken to avoid data poisoning attacks should be provided. • The type(s) of human feedback used (e.g., binary comparisons, scalar feedback, etc.) to suggest what risks might be caused by insufficiently abundant or rich feedback. • A report on measures taken for quality assurance in feedback collection and inter-rater consistency to ensure that effective quality control measures were taken. Reward model details: • The loss function used to fit the reward model and how disagreement was modeled (e.g., as noise) to help with analyzing the degree of misspecification when fitting the reward model. • A report on reward model evaluation and results to suggest possible problems from a misaligned reward model. The evaluation should involve red teaming. Policy details: • A report on policy evaluation and results to suggest possible troubles from a misaligned policy. The evaluation should involve red teaming and include assessment for risky capabilities (e.g., the ability to deceive a human). Systemic safety measures • A report on internal and external audits and red teaming to ensure accountability and disclose risks that are identified.
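For concreteness about the "loss function used to fit the reward model" item under the reward-model details above: a common (though not universal) choice in the RLHF literature is a Bradley-Terry-style preference loss, in which labeler disagreement is implicitly absorbed as noise under the logistic link. The form below is an illustrative default, not a claim about any particular system's implementation.

```latex
% Common preference loss for fitting a reward model r_\theta from comparisons
% in which response y_w was preferred over y_l for prompt x.
\mathcal{L}(\theta)
  = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}
    \Big[\log \sigma\big(r_\theta(x, y_w) - r_\theta(x, y_l)\big)\Big]
```

Here $\sigma$ is the logistic sigmoid and $\mathcal{D}$ is the dataset of human comparisons; disclosing deviations from this default (e.g., explicit models of annotator disagreement) is exactly the kind of detail the checklist asks for.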
2307.15217#69
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
70
Systemic safety measures • A report on internal and external audits and red teaming to ensure accountability and disclose risks that are identified. • A report on expected risks and anticipated failure modes to ensure accountability. • Plans for monitoring and correcting failures that emerge to support post-deployment safety. How these types of risks should be documented remains an area of active work in AI governance. Similar questions have been asked in an investigation by the US Federal Trade Commission into OpenAI (FTC, 2023) but in response to problems with ChatGPT rather than proactively. Salient documentation proposals focus on regular reporting of reward components (Gilbert et al., 2022) and the ability to compare the capabilities of language models according to standard benchmarks (Liang et al., 2022a). For the longer term, incorporating beneficial standards for safety and transparency into norms and regulations affecting AI is an ongoing challenge.
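The disclosure items above map naturally onto a structured audit template. The sketch below is a hypothetical illustration (the RLHFDisclosure dataclass and its field names are our own labels, not a standard proposed by the authors or by any regulator) of how such a checklist could be tracked and checked for completeness.

```python
# Hypothetical audit/disclosure template mirroring the items listed above.
# All field names are illustrative; they are not a proposed standard.
# Requires Python 3.9+ for the list[str] annotations.
from dataclasses import dataclass, field

@dataclass
class RLHFDisclosure:
    # Human feedback details
    pretraining_data_description: str = ""
    evaluator_selection_and_training: str = ""
    example_selection_process: str = ""
    feedback_types: list[str] = field(default_factory=list)  # e.g. ["binary comparison"]
    quality_assurance_measures: str = ""
    # Reward model details
    reward_model_loss_and_disagreement_handling: str = ""
    reward_model_evaluation_report: str = ""                  # should include red teaming
    # Policy details
    policy_evaluation_report: str = ""                        # red teaming + risky-capability checks
    # Systemic safety measures
    audit_and_red_team_reports: list[str] = field(default_factory=list)
    expected_risks_and_failure_modes: str = ""
    post_deployment_monitoring_plan: str = ""

    def missing_items(self) -> list[str]:
        """Return the names of fields left empty, as a crude completeness check."""
        return [name for name, value in vars(self).items() if not value]

# Example: an auditor instantiates the template from a lab's disclosures and
# flags any items that were not provided.
report = RLHFDisclosure(feedback_types=["binary comparison"])
print(report.missing_items())
```

An auditor could extend this kind of template with evidence links or reviewer sign-offs; the sketch only shows that the disclosure items are concrete enough to be tracked mechanically.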
2307.15217#70
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
71
Concerns for social and economic equity. Although this paper has focused on technical challenges with RLHF, there are social and economic ones as well which governance and industry should work to address. For example, OpenAI has paid Kenyan knowledge workers at a rate of less than $2 USD per hour (Perrigo, 2023) for work which was mentally and emotionally demanding (Hao, 2023). Human subjects used in RLHF research should not be systematically selected simply for their availability or low cost (National Commission for the Protection of Human Subjects, 1978). Costs, benefits, and influence over RLHF models should be equitably distributed across different communities (Whittlestone et al., 2021; Eloundou et al., 2023). There is an additional possibility that powerful AI systems will be highly profitable and serve to concentrate large amounts of wealth and power into the hands of a few (O’Keefe et al., 2020; Chan et al., 2023b). Thus, policies that address inequalities and protect vulnerable populations (e.g. impacted communities, whistleblowers) will be increasingly important. # 6 Discussion
2307.15217#71
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
72
# 6 Discussion While some problems with RLHF are tractable, others are fundamental. Technical progress in some respects is tractable, and this room for progress should be seen as a cause for concerted work and optimism. Even some of the fundamental problems that we overview can be alleviated with improved methodology even though they cannot be fully solved by RLHF. However, the fundamental nature of these problems requires that they be avoided or compensated for with non-RLHF approaches. Hence, we emphasize the importance of two strategies: (1) evaluating technical progress in light of the fundamental limitations of RLHF and other methods, and (2) addressing the sociotechnical challenges of aligning to human values by committing to both defense-in-depth safety measures and openly sharing research findings with the wider scientific community.
2307.15217#72
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
73
RLHF = Rehashing Lessons from Historical Failures? RLHF offers new capabilities but faces many old problems. Its use by Christiano et al. dates to 2017, and the individual components of it (preference elicitation, fitting a reward model, and policy optimization) have a history of technical and fundamental challenges in the fields of human-computer interaction and AI safety. In 2023, RLHF was described by the first author of Christiano et al. (2017) as a “basic solution” intended to make it easier to “productively work on more challenging alignment problems” (Christiano, 2023).[3] [Footnote 3: Christiano (2023) mentions debate (Irving et al., 2018) and recursive reward modeling (Leike et al., 2018) as examples of ‘more challenging alignment problems.’ See also an outline of proposals in Hubinger (2020).] Some challenges and questions that we have
2307.15217#73
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
74
Some challenges and questions that we have covered are rather unique to RLHF such as ones involving jointly training the reward model and policy (Section 3.4). However, many other problems are instances of broader ones in machine learning such as challenges with RL policies (Section 3.3). Others still are fundamental problems with AI alignment such as determining whose values are encoded into AI in a diverse society of humans (Section 3.2.1). The successes of RLHF should not obfuscate its limitations or gaps between the framework under which it is studied and real-world applications (see Appendix A). An approach to AI alignment that relies on RLHF without additional techniques for safety risks doubling-down on flawed approaches to AI alignment. Thus, it will be important to continue working to better understand RLHF while respecting its limitations.
2307.15217#74
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
75
Moving forward. RLHF has clear advantages for aligning AI systems with human goals. As a result, it has been key to the development of state-of-the-art LLMs and will likely continue to play a major role in modern AI. However, its use and influence should be accompanied by a commensurate research effort to better understand RLHF and address its flaws. Because it optimizes for human approval, RLHF in particular demands a special type of caution because many of its failures will actively tend to be ones that humans struggle to notice. It will be important to approach RLHF cautiously and work to incorporate it into a more holistic framework (Khlaaf, 2023) for safer AI with multiple layers of protection from failures (Hendrycks et al., 2021). Because some of the challenges with RLHF are fundamental to the AI alignment problem itself, moving forward will require confronting the basic choices and assumptions behind any given approach to aligning AI and who controls it (Dobbe et al., 2021). Moving forward, we urge that those working to develop advanced LLMs using RLHF both contribute toward resolving its open challenges and maintain transparency about the details of their approach to safety and any anticipated risks. # Contributions Stephen Casper and Xander Davies served as the central writers and organizers.
2307.15217#75
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
76
# Contributions
Stephen Casper and Xander Davies served as the central writers and organizers. Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Xin Chen, Dmitrii Krasheninnikov, Lauro Langosco, and Peter Hase contributed to writing and planning the paper. Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, and Dylan Hadfield-Menell served as advisors.
# Acknowledgements
We thank Sam Bowman, Adam Jermyn, Ethan Perez, Alan Chan, Gabriel Recchia, Robert Kirk, and Nathan Lambert for their helpful feedback. This work was facilitated in part by the Harvard AI Safety Team and MIT AI Alignment.
# References
2307.15217#76
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
77
# References
Afra Feyza Akyürek, Ekin Akyürek, Aman Madaan, Ashwin Kalyan, Peter Clark, Derry Wijaya, and Niket Tandon. RL4F: Generating natural language feedback with reinforcement learning for repairing model outputs. arXiv preprint arXiv:2305.08844, 2023.
Alex Albert. Jailbreak chat. 2023. URL https://www.jailbreakchat.com/.
Susan Amin, Maziar Gomrokchi, Harsh Satija, Herke van Hoof, and Doina Precup. A survey of exploration methods in reinforcement learning. arXiv preprint arXiv:2109.00157, 2021.
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016.
2307.15217#77
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
78
Markus Anderljung, Joslyn Barnhart, Jade Leung, Anton Korinek, Cullen O’Keefe, Jess Whittlestone, Shahar Avin, Miles Brundage, Justin Bullock, Duncan Cass-Beggs, Ben Chang, Tantum Collins, Tim Fist, Gillian Hadfield, Alan Hayes, Lewis Ho, Sara Hooker, Eric Horvitz, Noam Kolt, Jonas Schuett, Yonadav Shavit, Divya Siddarth, Robert Trager, and Kevin Wolf. Frontier AI regulation: Managing emerging risks to public safety, 2023.
Anthropic. Introducing Claude, 2023. URL https://www.anthropic.com/index/introducing-claude.
ARC. Arc evals, 2022. URL https://evals.alignment.org/.
Dilip Arumugam, Jun Ki Lee, Sophie Saskin, and Michael L Littman. Deep reinforcement learning from policy-dependent human feedback. arXiv preprint arXiv:1902.04257, 2019.
Hui Bai. Artificial Intelligence Can Persuade Humans. 2023.
2307.15217#78
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
79
Hui Bai. Artificial Intelligence Can Persuade Humans. 2023.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022a.
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073, 2022b.
Andrea Bajcsy, Dylan P Losey, Marcia K O’Malley, and Anca D Dragan. Learning from physical human corrections, one feature at a time. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, pages 141–149, 2018.
2307.15217#79
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
80
Michiel Bakker, Martin Chadwick, Hannah Sheahan, Michael Tessler, Lucy Campbell-Gillingham, Jan Balaguer, Nat McAleese, Amelia Glaese, John Aslanides, Matt Botvinick, et al. Fine-tuning language models to find agreement among humans with diverse preferences. Advances in Neural Information Processing Systems, 35:38176–38189, 2022.
Peter Barnett, Rachel Freedman, Justin Svegliato, and Stuart Russell. Active reward learning from multiple teachers. arXiv preprint arXiv:2303.00894, 2023.
Connor Baumler, Anna Sotnikova, and Hal Daumé III. Which examples should be multiply annotated? Active learning when annotators may disagree. In Findings of the Association for Computational Linguistics: ACL 2023, pages 10352–10371, 2023.
James Bennett, Stan Lanning, et al. The Netflix prize. In Proceedings of KDD cup and workshop, volume 2007, page 35. New York, 2007.
Kush Bhatia, Ashwin Pananjady, Peter Bartlett, Anca Dragan, and Martin J Wainwright. Preference learning along multiple criteria: A game-theoretic perspective. Advances in neural information processing systems, 33:7413–7424, 2020.
2307.15217#80
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
81
Erdem Biyik. Learning Preferences For Interactive Autonomy. PhD thesis, EE Department, Stanford University, 2022.
Erdem Biyik and Dorsa Sadigh. Batch active preference-based learning of reward functions. In Conference on robot learning, pages 519–528. PMLR, 2018.
Erdem Biyik, Malayandi Palan, Nicholas C. Landolfi, Dylan P. Losey, and Dorsa Sadigh. Asking easy questions: A user-friendly approach to active reward learning. In Proceedings of the 3rd Conference on Robot Learning (CoRL), 2019.
Erdem Biyik, Nicolas Huynh, Mykel J. Kochenderfer, and Dorsa Sadigh. Active preference-based Gaussian process regression for reward learning. In Proceedings of Robotics: Science and Systems (RSS), July 2020. doi: 10.15607/rss.2020.xvi.041.
Erdem Bıyık, Dylan P Losey, Malayandi Palan, Nicholas C Landolfi, Gleb Shevchuk, and Dorsa Sadigh. Learning reward functions from diverse sources of human feedback: Optimally integrating demonstrations and preferences. The International Journal of Robotics Research, 41(1):45–67, 2022.
2307.15217#81
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
82
Andreea Bobu, Andrea Bajcsy, Jaime F Fisac, Sampada Deglurkar, and Anca D Dragan. Quantifying hypothesis space misspecification in learning from human–robot demonstrations and physical corrections. IEEE Transactions on Robotics, 36(3):835–854, 2020.
Andreea Bobu, Andi Peng, Pulkit Agrawal, Julie Shah, and Anca D Dragan. Aligning robot and human representations. arXiv preprint arXiv:2302.01928, 2023.
Michael Bowling, John D Martin, David Abel, and Will Dabney. Settling the reward hypothesis. In International Conference on Machine Learning, pages 3003–3020. PMLR, 2023.
2307.15217#82
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
83
Samuel R. Bowman, Jeeyoon Hyun, Ethan Perez, Edwin Chen, Craig Pettit, Scott Heiner, Kamilė Lukošiūtė, Amanda Askell, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Christopher Olah, Daniela Amodei, Dario Amodei, Dawn Drain, Dustin Li, Eli Tran-Johnson, Jackson Kernion, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Liane Lovitt, Nelson Elhage, Nicholas Schiefer, Nicholas Joseph, Noemí Mercado, Nova DasSarma, Robin Larson, Sam McCandlish, Sandipan Kundu, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Timothy Telleen-Lawton, Tom Brown, Tom Henighan, Tristan Hume, Yuntao Bai, Zac Hatfield-Dodds, Ben Mann, and Jared Kaplan. Measuring Progress on Scalable Oversight for Large Language Models, November 2022. URL http://arxiv.org/abs/2211.03540. arXiv:2211.03540 [cs].
CSET Policy Brief. Why ai chips matter. 2020.
2307.15217#83
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
84
CSET Policy Brief. Why ai chips matter. 2020.
Daniel Brown, Wonjoon Goo, Prabhat Nagarajan, and Scott Niekum. Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations. In International conference on machine learning, pages 783–792. PMLR, 2019.
Daniel Brown, Russell Coleman, Ravi Srinivasan, and Scott Niekum. Safe imitation learning via fast Bayesian reward inference from preferences. In International Conference on Machine Learning, pages 1165–1177. PMLR, 2020.
Serkan Cabi, Sergio Gómez Colmenarejo, Alexander Novikov, Ksenia Konyushkova, Scott Reed, Rae Jeong, Konrad Zolna, Yusuf Aytar, David Budden, Mel Vecerik, et al. Scaling data-driven robotics with reward sketching and batch reinforcement learning. arXiv preprint arXiv:1909.12200, 2019.
Nicholas Carlini, Matthew Jagielski, Christopher A Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum Anderson, Andreas Terzis, Kurt Thomas, and Florian Tramèr. Poisoning web-scale training datasets is practical. arXiv preprint arXiv:2302.10149, 2023a.
2307.15217#84
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
85
Nicholas Carlini, Milad Nasr, Christopher A Choquette-Choo, Matthew Jagielski, Irena Gao, Anas Awadalla, Pang Wei Koh, Daphne Ippolito, Katherine Lee, Florian Tramer, et al. Are aligned neural networks adversarially aligned? arXiv preprint arXiv:2306.15447, 2023b.
Micah Carroll, Alan Chan, Henry Ashton, and David Krueger. Characterizing Manipulation from AI Systems, March 2023. URL http://arxiv.org/abs/2303.09387. arXiv:2303.09387 [cs].
Micah D Carroll, Anca Dragan, Stuart Russell, and Dylan Hadfield-Menell. Estimating and penalizing induced preference shifts in recommender systems. In Proceedings of the 39th International Conference on Machine Learning, 2022.
Stephen Casper. Achilles heels for AGI/ASI via decision theoretic adversaries. arXiv preprint arXiv:2010.05418, 2020.
Stephen Casper, Dylan Hadfield-Menell, and Gabriel Kreiman. White-box adversarial policies in deep reinforcement learning. arXiv preprint arXiv:2209.02167, 2022.
2307.15217#85
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
86
Stephen Casper, Yuxiao Li, Jiawei Li, Tong Bu, Kevin Zhang, and Dylan Hadfield-Menell. Benchmarking interpretability tools for deep neural networks. arXiv preprint arXiv:2302.10894, 2023a.
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, and Dylan Hadfield-Menell. Explore, establish, exploit: Red teaming language models from scratch. arXiv preprint arXiv:2306.09442, 2023b.
Stephen Casper, Max Nadeau, Dylan Hadfield-Menell, and Gabriel Kreiman. Robust feature-level adversaries are interpretability tools, 2023c.
Christopher P Chambers and Federico Echenique. Revealed preference theory, volume 56. Cambridge University Press, 2016.
Alan Chan, Herbie Bradley, and Nitarshan Rajkumar. Reclaiming the digital commons: A public data trust for training data. arXiv preprint arXiv:2303.09001, 2023a.
2307.15217#86
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
87
Alan Chan, Rebecca Salganik, Alva Markelius, Chris Pang, Nitarshan Rajkumar, Dmitrii Krasheninnikov, Lauro Langosco di Langosco, Zhonghao He, Yawen Duan, Micah Carroll, Michelle Lin, Alex Mayhew, Katherine Collins, Maryam Molamohammadi, John Burden, Wanru Zhao, Shalaleh Rismani, Konstantinos Voudouris, Umang Bhatt, Adrian Weller, David Krueger, and Tegan Maharaj. Harms from increasingly agentic algorithmic systems. ArXiv, abs/2302.10329, 2023b. Lawrence Chan, Dylan Hadfield-Menell, Siddhartha Srinivasa, and Anca Dragan. The Assistive Multi-Armed Bandit. arXiv:1901.08654 [cs, stat], January 2019. URL http://arxiv.org/abs/1901.08654. arXiv: 1901.08654. Angelica Chen, Jérémy Scheurer, Tomasz Korbak, Jon Ander Campos, Jun Shern Chan, Samuel R Bowman, Kyunghyun Cho, and Ethan Perez. Improving code generation by training with natural language feedback. arXiv preprint arXiv:2303.16749, 2023.
2307.15217#87
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
88
Michael Chmielewski and Sarah C Kucker. An mturk crisis? shifts in data quality and the impact on study results. Social Psychological and Personality Science, 11(4):464–473, 2020. Paul Christiano. Worst-case guarantees. https://ai-alignment.com/training-robust-corrigibility-ce0e0a3b9b4d, 2019. Paul Christiano. Thoughts on the impact of rlhf research, Jan 2023. URL https://www.alignmentforum.org/posts/vwu4kegAEZTBtpT6p/thoughts-on-the-impact-of-rlhf-research#The_case_for_a_positive_impact:~:text=I%20think%20it%20is%20hard%20to%20productively%20work%20on%20more%20challenging%20alignment%20problems%20without%20first%20implementing%20basic%20solutions. Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30, 2017.
2307.15217#88
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
89
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. Scaling instruction-finetuned language models, 2022. URL https://arxiv.org/abs/2210.11416. Peter Cihon. Standards for ai governance: international standards to enable global coordination in ai research & development. Future of Humanity Institute. University of Oxford, 2019. Michael K Cohen and Marcus Hutter. Curiosity killed the cat and the asymptotically optimal agent. arXiv preprint arXiv:2006.03357, 2020.
2307.15217#89
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
90
Michael K Cohen and Marcus Hutter. Curiosity killed the cat and the asymptotically optimal agent. arXiv preprint arXiv:2006.03357, 2020. Ajeya Cotra. Why ai alignment could be hard with modern deep learning. https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/, 2021. Andrew Critch and David Krueger. Ai research considerations for human existential safety (arches). arXiv preprint arXiv:2006.04948, 2020. Audrey Cui, Ali Jahanian, Agata Lapedriza, Antonio Torralba, Shahin Mahdizadehaghdam, Rohit Kumar, and David Bau. Local relighting of real scenes, 2022. Allan Dafoe. Ai governance: a research agenda. Governance of AI Program, Future of Humanity Institute, University of Oxford: Oxford, UK, 1442:1443, 2018. Oliver Daniels-Koch and Rachel Freedman. The expertise problem: Learning from specialized feedback. arXiv preprint arXiv:2211.06519, 2022.
2307.15217#90
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
91
Oliver Daniels-Koch and Rachel Freedman. The expertise problem: Learning from specialized feedback. arXiv preprint arXiv:2211.06519, 2022. Aida Mostafazadeh Davani, Mark Díaz, and Vinodkumar Prabhakaran. Dealing with disagreements: Looking beyond the majority vote in subjective annotations. Transactions of the Association for Computational Linguistics, 10:92–110, 2022. Lauro Langosco Di Langosco, Jack Koch, Lee D Sharkey, Jacob Pfau, and David Krueger. Goal misgeneralization in deep reinforcement learning. In International Conference on Machine Learning, pages 12004–12019. PMLR, 2022. Zihan Ding and Hao Dong. Challenges of reinforcement learning. Deep Reinforcement Learning: Fundamentals, Research and Applications, pages 249–272, 2020. Roel Dobbe, Thomas Krendl Gilbert, and Yonatan Mintz. Hard choices in artificial intelligence. Artificial Intelligence, 300:103555, 2021. Hanze Dong, Wei Xiong, Deepanshu Goyal, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang. Raft: Reward ranked finetuning for generative foundation model alignment. arXiv preprint arXiv:2304.06767, 2023.
2307.15217#91
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
92
Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate. arXiv preprint arXiv:2305.14325, 2023. El-Mahdi El-Mhamdi, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê-Nguyên Hoang, Rafael Pinot, and John Stephan. Sok: On the impossible security of very large foundation models. arXiv preprint arXiv:2209.15259, 2022. Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock. Gpts are gpts: An early look at the labor market impact potential of large language models, 2023. Kawin Ethayarajh and Dan Jurafsky. The authenticity gap in human evaluation, 2022. Tom Everitt, Marcus Hutter, Ramana Kumar, and Victoria Krakovna. Reward Tampering Problems and Solutions in Reinforcement Learning: A Causal Influence Diagram Perspective. arXiv:1908.04734 [cs], March 2021. URL http://arxiv.org/abs/1908.04734.
2307.15217#92
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
93
Gregory Falco, Ben Shneiderman, Julia Badger, Ryan Carrier, Anton Dahbura, David Danks, Martin Eling, Alwyn Goodloe, Jerry Gupta, Christopher Hart, et al. Governing ai safety through independent audits. Nature Machine Intelligence, 3(7):566–571, 2021. Michael Feffer, Hoda Heidari, and Zachary C Lipton. Moral machine or tyranny of the majority? arXiv preprint arXiv:2305.17319, 2023. Luciano Floridi and Josh Cowls. A unified framework of five principles for ai in society. Machine learning and the city: Applications in architecture and urban design, pages 535–545, 2022. Rachel Freedman, Rohin Shah, and Anca Dragan. Choice set misspecification in reward inference. arXiv preprint arXiv:2101.07691, 2021. Aaron French. The mandela effect and new memory. Correspondences, 6(2), 2019. Federal Trade Commission. Civil investigative demand schedule, FTC File No. 232-3044, July 2023. URL https://www.washingtonpost.com/documents/67a7081c-c770-4f05-a39e-9d02117e50e8.pdf?itid=lk_inline_manual_4.
2307.15217#93
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
94
Justin Fu, Anoop Korattikara, Sergey Levine, and Sergio Guadarrama. From language to goals: Inverse reinforcement learning for vision-based instruction following. arXiv preprint arXiv:1902.07742, 2019. Leo Gao, John Schulman, and Jacob Hilton. Scaling laws for reward model overoptimization. arXiv preprint arXiv:2210.10760, 2022. Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3356–3369, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.301. URL https://aclanthology.org/2020.findings-emnlp.301. Atticus Geiger, Chris Potts, and Thomas Icard. Causal abstraction for faithful model interpretation. arXiv preprint arXiv:2301.04709, 2023. URL https://arxiv.org/pdf/2301.04709.pdf.
2307.15217#94
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
95
Fabrizio Gilardi, Meysam Alizadeh, and Maël Kubli. Chatgpt outperforms crowd-workers for text-annotation tasks. arXiv preprint arXiv:2303.15056, 2023. Thomas Krendl Gilbert and Andrew Loveridge. Subjectifying objectivity: Delineating tastes in theoretical quantum gravity research. Social Studies of Science, 51(1):73–99, 2021. Thomas Krendl Gilbert, Nathan Lambert, Sarah Dean, Tom Zick, and Aaron Snoswell. Reward reports for reinforcement learning. arXiv preprint arXiv:2204.10817, 2022.
2307.15217#95
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
96
Amelia Glaese, Nat McAleese, Maja Trębacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin Chadwick, Phoebe Thacker, Lucy Campbell-Gillingham, Jonathan Uesato, Po- Sen Huang, Ramona Comanescu, Fan Yang, Abigail See, Sumanth Dathathri, Rory Greig, Charlie Chen, Doug Fritz, Jaume Sanchez Elias, Richard Green, Soňa Mokrá, Nicholas Fernando, Boxi Wu, Rachel Foley, Susannah Young, Iason Gabriel, William Isaac, John Mellor, Demis Hassabis, Koray Kavukcuoglu, Lisa Anne Hendricks, and Geoffrey Irving. Improving alignment of dialogue agents via targeted human judgements, 2022. Adam Gleave and Geoffrey Irving. Uncertainty estimation for language reward models. arXiv preprint arXiv:2203.07472, 2022. Adam Gleave, Michael Dennis, Shane Legg, Stuart Russell, and Jan Leike. Quantifying differences in reward functions. arXiv preprint arXiv:2006.13900, 2020a.
2307.15217#96
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
97
Adam Gleave, Michael Dennis, Cody Wild, Neel Kant, Sergey Levine, and Stuart Russell. Adversarial policies: Attacking deep reinforcement learning. In International Conference on Learning Representations, 2020b. URL https://openreview.net/forum?id=HJgEMpVFwB. Dongyoung Go, Tomasz Korbak, Germán Kruszewski, Jos Rozen, Nahyeon Ryu, and Marc Dymetman. Aligning language models with preferences through f-divergence minimization, 2023. Google. Bard, 2023. URL https://bard.google.com/. Mitchell L Gordon, Kaitlyn Zhou, Kayur Patel, Tatsunori Hashimoto, and Michael S Bernstein. The disagreement deconvolution: Bringing machine learning performance metrics in line with reality. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1–14, 2021. Mitchell L Gordon, Michelle S Lam, Joon Sung Park, Kayur Patel, Jeff Hancock, Tatsunori Hashimoto, and Michael S Bernstein. Jury learning: Integrating dissenting voices into machine learning models. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pages 1–19, 2022.
2307.15217#97
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
98
Prasoon Goyal, Scott Niekum, and Raymond J Mooney. Using natural language for reward shaping in reinforcement learning. arXiv preprint arXiv:1903.02020, 2019. Lewis D. Griffin, Bennett Kleinberg, Maximilian Mozes, Kimberly T. Mai, Maria Vau, Matthew Caldwell, and Augustine Marvor-Parker. Susceptibility to Influence of Large Language Models, March 2023. URL http://arxiv.org/abs/2303.06074. arXiv:2303.06074 [cs]. Luke Guerdan, Amanda Coston, Zhiwei Steven Wu, and Kenneth Holstein. Ground (less) truth: A causal framework for proxy labels in human-algorithm decision-making. arXiv preprint arXiv:2302.06503, 2023. Gillian K Hadfield and Jack Clark. Regulatory markets: The future of ai governance. arXiv preprint arXiv:2304.04914, 2023. Dylan Hadfield-Menell, Stuart J Russell, Pieter Abbeel, and Anca Dragan. Cooperative inverse reinforcement learning. Advances in neural information processing systems, 29, 2016.
2307.15217#98
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
99
Dylan Hadfield-Menell, Smitha Milli, Pieter Abbeel, Stuart J Russell, and Anca Dragan. Inverse reward design. Advances in Neural Information Processing Systems, 30, 2017.
Karen Hao. The hidden workforce that helped filter violence and abuse out of ChatGPT. URL: the-hidden-workforce-that-helped-filter-violence-and-abuse-out-of-chatgpt/ffc2427f-bdd8-47b7-9a4b-27e7267cf413.
Jochen Hartmann, Jasper Schwenzow, and Maximilian Witte. The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation. arXiv preprint arXiv:2301.01768, 2023.
Peter Hase, Mohit Bansal, Been Kim, and Asma Ghandeharioun. Does localization inform editing? Surprising differences in causality-based localization vs. knowledge editing in language models. arXiv preprint arXiv:2301.04213, 2023. URL https://arxiv.org/pdf/2301.04213.pdf.
Joey Hejna and Dorsa Sadigh. Few-shot preference learning for human-in-the-loop RL. In Proceedings of the 6th Conference on Robot Learning (CoRL), 2022.
2307.15217#99
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
100
Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv preprint arXiv:1610.02136, 2016.
Dan Hendrycks, Nicholas Carlini, John Schulman, and Jacob Steinhardt. Unsolved problems in ML safety. arXiv preprint arXiv:2109.13916, 2021.
Evan Hernandez, Belinda Z Li, and Jacob Andreas. Measuring and manipulating knowledge representations in language models. arXiv preprint arXiv:2304.00740, 2023.
Jacob Hilton, Nick Cammarata, Shan Carter, Gabriel Goh, and Chris Olah. Understanding RL vision. Distill, 5(11):e29, 2020.
Joey Hong, Kush Bhatia, and Anca Dragan. On the sensitivity of reward inference to misspecified human models. arXiv preprint arXiv:2212.04717, 2022.
2307.15217#100
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
101
models. arXiv preprint arXiv:2212.04717, 2022.
Keith Hoskin. The 'awful idea of accountability': inscribing people into the measurement of objects. Accountability: Power, Ethos and the Technologies of Managing, 265, 1996.
Jian Hu, Li Tao, June Yang, and Chandler Zhou. Aligning language models with offline reinforcement learning from human feedback. arXiv preprint arXiv:2308.12050, 2023.
Evan Hubinger. An overview of 11 proposals for building safe advanced AI. arXiv preprint arXiv:2012.07532, 2020.
Borja Ibarz, Jan Leike, Tobias Pohlen, Geoffrey Irving, Shane Legg, and Dario Amodei. Reward learning from human preferences and demonstrations in Atari. Advances in Neural Information Processing Systems, 31, 2018.
Alex Irpan. Deep reinforcement learning doesn't work yet. https://www.alexirpan.com/2018/02/14/rl-hard.html, 2018.
Geoffrey Irving, Paul Christiano, and Dario Amodei. AI safety via debate. arXiv preprint arXiv:1805.00899, 2018.
2307.15217#101
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
102
Geoffrey Irving, Paul Christiano, and Dario Amodei. AI safety via debate. arXiv preprint arXiv:1805.00899, 2018.
Alon Jacovi, Ana Marasović, Tim Miller, and Yoav Goldberg. Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in AI. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 624–635, 2021. URL https://arxiv.org/pdf/2010.07487.pdf.
Nils Jansen, Bettina Könighofer, Sebastian Junges, Alexandru C Serban, and Roderick Bloem. Safe reinforcement learning via probabilistic shields. arXiv preprint arXiv:1807.06096, 2018.
Hong Jun Jeon, Smitha Milli, and Anca Dragan. Reward-rational (implicit) choice: A unifying formalism for reward learning. Advances in Neural Information Processing Systems, 33:4415–4426, 2020.
2307.15217#102
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
103
Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1–38, 2023.
Suzanne Junod. FDA and clinical drug trials: a short history. FDLI Update, page 55, 2008.
Zachary Kenton, Tom Everitt, Laura Weidinger, Iason Gabriel, Vladimir Mikulik, and Geoffrey Irving. Alignment of language agents. arXiv preprint arXiv:2103.14659, March 2021. URL http://arxiv.org/abs/2103.14659.
Muhammad Khalifa, Hady Elsahar, and Marc Dymetman. A distributional approach to controlled text generation. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=jWkw45-9AbL.
Heidy Khlaaf. Toward comprehensive risk assessments and assurance of AI-based systems. Trail of Bits, 2023.
2307.15217#103
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
104
Heidy Khlaaf. Toward comprehensive risk assessments and assurance of AI-based systems. Trail of Bits, 2023.
Sungdong Kim, Sanghwan Bae, Jamin Shin, Soyoung Kang, Donghyun Kwak, Kang Min Yoo, and Minjoon Seo. Aligning large language models through synthetic feedback. arXiv preprint arXiv:2305.13735, 2023.
Hannah Rose Kirk, Bertie Vidgen, Paul Röttger, and Scott A Hale. Personalisation within bounds: A risk taxonomy and policy framework for the alignment of large language models with personalised feedback. arXiv preprint arXiv:2303.05453, 2023.
W Bradley Knox and Peter Stone. TAMER: Training an agent manually via evaluative reinforcement. In 2008 7th IEEE International Conference on Development and Learning, pages 292–297. IEEE, 2008.
W Bradley Knox, Stephane Hatgis-Kessell, Serena Booth, Scott Niekum, Peter Stone, and Alessandro Allievi. Models of human preference for learning reward functions. arXiv preprint arXiv:2206.02231, 2022.
2307.15217#104
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
105
On reinforcement learning and distribution matching for fine-tuning language models with no catastrophic forgetting. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 16203–16220. Curran Associates, 2022a. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/67496dfa96afddab795530cc7c69b57a-Paper-Conference.pdf.
Tomasz Korbak, Ethan Perez, and Christopher Buckley. RL with KL penalties is better viewed as Bayesian inference. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 1083–1091, Abu Dhabi, United Arab Emirates, December 2022b. Association for Computational Linguistics. URL https://aclanthology.org/2022.findings-emnlp.77.
Tomasz Korbak, Kejian Shi, Angelica Chen, Rasika Bhalerao, Christopher L. Buckley, Jason Phang, Samuel R. Bowman, and Ethan Perez. Pretraining language models with human preferences, 2023.
2307.15217#105
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
106
Jernej Kos and Dawn Song. Delving into adversarial attacks on deep policies. arXiv preprint arXiv:1705.06452, 2017.
Victoria Krakovna and Janos Kramar. Power-seeking can be probable and predictive for trained agents. arXiv preprint arXiv:2304.06528, 2023.
Victoria Krakovna, Jonathan Uesato, Vladimir Mikulik, Matthew Rahtz, Tom Everitt, Ramana Kumar, Zac Kenton, Jan Leike, and Shane Legg. Specification gaming: the flip side of AI ingenuity. DeepMind Blog, 2020.
Dmitrii Krasheninnikov, Egor Krasheninnikov, and David Krueger. Assistance with large language models. In NeurIPS ML Safety Workshop, 2022.
David Krueger, Tegan Maharaj, and Jan Leike. Hidden incentives for auto-induced distributional shift, 2020.
Deepak Kumar, Patrick Gage Kelley, Sunny Consolvo, Joshua Mason, Elie Bursztein, Zakir Durumeric, Kurt Thomas, and Michael Bailey. Designing toxic content classification for a diversity of perspectives. In SOUPS @ USENIX Security Symposium, pages 299–318, 2021.
Stefan Larsson and Fredrik Heintz. Transparency in artificial intelligence. Internet Policy Review, 9(2), 2020.
2307.15217#106
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
107
Stefan Larsson and Fredrik Heintz. Transparency in artificial intelligence. Internet Policy Review, 9(2), 2020.
Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie Lu, Thomas Mesnard, Colton Bishop, Victor Carbune, and Abhinav Rastogi. RLAIF: Scaling reinforcement learning from human feedback with AI feedback, 2023.
Kimin Lee, Laura Smith, and Pieter Abbeel. PEBBLE: Feedback-efficient interactive reinforcement learning via relabeling experience and unsupervised pre-training. arXiv preprint arXiv:2106.05091, 2021.
Jan Leike, David Krueger, Tom Everitt, Miljan Martic, Vishal Maini, and Shane Legg. Scalable agent alignment via reward modeling: a research direction. arXiv preprint arXiv:1811.07871, 2018.
Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643, 2020.
Haoran Li, Dadi Guo, Wei Fan, Mingshi Xu, and Yangqiu Song. Multi-step jailbreaking privacy attacks on ChatGPT. arXiv preprint arXiv:2304.05197, 2023a.
2307.15217#107
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
108
chatgpt. arXiv preprint arXiv:2304.05197, 2023a.
Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. Inference-time intervention: Eliciting truthful answers from a language model, 2023b.
Mengxi Li, Alper Canberk, Dylan P Losey, and Dorsa Sadigh. Learning human objectives from sequences of physical corrections. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 2877–2883. IEEE, 2021.
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110, 2022a.
Xinran Liang, Katherine Shu, Kimin Lee, and Pieter Abbeel. Reward uncertainty for exploration in preference-based reinforcement learning. In International Conference on Learning Representations, 2022b.
2307.15217#108
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
109
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. arXiv preprint arXiv:2305.20050, 2023.
Jessy Lin, Daniel Fried, Dan Klein, and Anca Dragan. Inferring rewards from language in context. arXiv preprint arXiv:2204.02515, 2022.
David Lindner and Mennatallah El-Assady. Humans are not Boltzmann distributions: Challenges and opportunities for modelling human feedback and interaction in reinforcement learning. arXiv preprint arXiv:2206.13316, 2022.
David Lindner, Xin Chen, Sebastian Tschiatschek, Katja Hofmann, and Andreas Krause. Learning safety constraints from demonstrations with unknown rewards. arXiv preprint arXiv:2305.16147, 2023.
Yi Liu, Gelei Deng, Zhengzi Xu, Yuekang Li, Yaowen Zheng, Ying Zhang, Lida Zhao, Tianwei Zhang, and Yang Liu. Jailbreaking ChatGPT via prompt engineering: An empirical study, 2023.
2307.15217#109
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
110
Dylan P Losey, Andrea Bajcsy, Marcia K O'Malley, and Anca D Dragan. Physical interaction as communication: Learning robot objectives online from human corrections. The International Journal of Robotics Research, 41(1):20–44, 2022.
Jelena Luketina, Nantas Nardelli, Gregory Farquhar, Jakob Foerster, Jacob Andreas, Edward Grefenstette, Shimon Whiteson, and Tim Rocktäschel. A survey of reinforcement learning informed by natural language. arXiv preprint arXiv:1906.03926, 2019.
James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, Guan Wang, David L Roberts, Matthew E Taylor, and Michael L Littman. Interactive learning from policy-dependent human feedback. In International Conference on Machine Learning, pages 2285–2294. PMLR, 2017.
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651, 2023.
2307.15217#110
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
111
Shehryar Malik, Usman Anwar, Alireza Aghasi, and Ali Ahmed. Inverse constrained reinforcement learning. In International conference on machine learning, pages 7390–7399. PMLR, 2021. David Manheim and Scott Garrabrant. Categorizing variants of goodhart’s law. arXiv preprint arXiv:1803.04585, 2018. Lev McKinney, Yawen Duan, David Krueger, and Adam Gleave. On the fragility of learned reward functions. arXiv preprint arXiv:2301.03652, 2023. Kevin Meng, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, and David Bau. Mass-editing memory in a transformer. arXiv preprint arXiv:2210.07229, 2022. Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. Locating and editing factual associations in gpt, 2023. Eric J Michaud, Adam Gleave, and Stuart Russell. Understanding learned reward functions. arXiv preprint arXiv:2012.05862, 2020. Silvia Milano, Mariarosaria Taddeo, and Luciano Floridi. Ethical aspects of multi-stakeholder recommendation systems. The information society, 37(1):35–45, 2021.
2307.15217#111
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
112
Smitha Milli and Anca D Dragan. Literal or pedagogic human? analyzing human model misspecification in objective learning. In Uncertainty in artificial intelligence, pages 925–934. PMLR, 2020. Soren Mindermann and Stuart Armstrong. Occam’s razor is insufficient to infer the preferences of irrational agents. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS’18, page 5603–5614, Red Hook, NY, USA, 2018. Curran Associates Inc. Jakob Mökander, Jonas Schuett, Hannah Rose Kirk, and Luciano Floridi. Auditing large language models: a three-layered approach. arXiv preprint arXiv:2302.08500, 2023. Vivek Myers, Erdem Biyik, Nima Anari, and Dorsa Sadigh. Learning multimodal rewards from rankings. In Conference on Robot Learning, pages 342–352. PMLR, 2021. United States National Commission for the Protection of Human Subjects. The Belmont report: ethical principles and guidelines for the protection of human subjects of research, volume 1. United States Department of Health, Education, and Welfare, National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1978.
2307.15217#112
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
113
Andrew Y Ng, Stuart Russell, et al. Algorithms for inverse reinforcement learning. In Icml, volume 1, page 2, 2000. Richard Ngo. The alignment problem from a deep learning perspective. arXiv preprint arXiv:2209.00626, 2022. Khanh Nguyen, Hal Daumé III, and Jordan Boyd-Graber. Reinforcement learning for bandit neural machine translation with simulated human feedback. arXiv preprint arXiv:1707.07402, 2017. Evgenii Nikishin, Pavel Izmailov, Ben Athiwaratkun, Dmitrii Podoprikhin, Timur Garipov, Pavel Shvechikov, Dmitry Vetrov, and Andrew Gordon Wilson. Improving stability in deep reinforcement learning with weight averaging. In Uncertainty in artificial intelligence workshop on uncertainty in Deep learning, 2018. Ritesh Noothigattu, Snehalkumar Gaikwad, Edmond Awad, Sohan Dsouza, Iyad Rahwan, Pradeep Ravikumar, and Ariel Procaccia. A voting-based system for ethical decision making. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
2307.15217#113
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
114
Cullen O’Keefe, Peter Cihon, Ben Garfinkel, Carrick Flynn, Jade Leung, and Allan Dafoe. The windfall clause: Distributing the benefits of ai for the common good. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pages 327–331, 2020. Salima Omar, Asri Ngadi, and Hamid H Jebur. Machine learning techniques for anomaly detection: an overview. International Journal of Computer Applications, 79(2), 2013. A.J. Oneal. Chat gpt "dan" (and other "jailbreaks"). https://gist.github.com/coolaj86/6f4f7b30129b0251f61fa7baaa881516, 2023. OpenAI. Gpt-4 technical report, 2023. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
2307.15217#114
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
115
Alexander Pan, Kush Bhatia, and Jacob Steinhardt. The effects of reward misspecification: Mapping and mitigating misaligned models. arXiv preprint arXiv:2201.03544, 2022. Rahul Pandey, Hemant Purohit, Carlos Castillo, and Valerie L Shalin. Modeling and mitigating human annotation errors to design efficient stream processing systems with human-in-the-loop machine learning. International Journal of Human-Computer Studies, 160:102772, 2022. Guansong Pang, Chunhua Shen, Longbing Cao, and Anton Van Den Hengel. Deep learning for anomaly detection: A review. ACM computing surveys (CSUR), 54(2):1–38, 2021. Andi Peng, Besmira Nushi, Emre Kıcıman, Kori Inkpen, Siddharth Suri, and Ece Kamar. What you see is what you get? the impact of representation criteria on human bias in hiring. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, volume 7, pages 125–134, 2019.
2307.15217#115
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
116
Andi Peng, Besmira Nushi, Emre Kiciman, Kori Inkpen, and Ece Kamar. Investigations of performance and bias in human-ai teamwork in hiring. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 12089–12097, 2022. Andi Peng, Aviv Netanyahu, Mark K Ho, Tianmin Shu, Andreea Bobu, Julie Shah, and Pulkit Agrawal. Diagnosis, feedback, adaptation: A human-in-the-loop framework for test-time policy adaptation. In Proceedings of the 40th International Conference on Machine Learning, 2023. Ethan Perez, Saffron Huang, Francis Song, Trevor Cai, Roman Ring, John Aslanides, Amelia Glaese, Nat McAleese, and Geoffrey Irving. Red teaming language models with language models. arXiv preprint arXiv:2202.03286, 2022a. Ethan Perez, Sam Ringer, Kamilė Lukošiūtė, Karina Nguyen, Edwin Chen, Scott Heiner, Craig Pettit, Catherine Olsson, Sandipan Kundu, Saurav Kadavath, et al. Discovering language model behaviors with model-written evaluations. arXiv preprint arXiv:2212.09251, 2022b.
2307.15217#116
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
117
Billy Perrigo. Exclusive: The $2 per hour workers who made chatgpt safer, 2023. URL https://time.com/6247678/openai-chatgpt-kenya-workers/. [Accessed 07-May-2023]. Brandon Perry and Risto Uuk. Ai governance and the policymaking process: key considerations for reducing ai risk. Big data and cognitive computing, 3(2):26, 2019. Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh. Do users write more insecure code with ai assistants?, 2022. Vinodkumar Prabhakaran, Aida Mostafazadeh Davani, and Mark Diaz. On releasing annotator-level labels and information in datasets. arXiv preprint arXiv:2110.05699, 2021. Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290, 2023.
2307.15217#117
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
118
Deepak Ramachandran and Eyal Amir. Bayesian inverse reinforcement learning. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, IJCAI’07, page 2586–2591, San Francisco, CA, USA, 2007. Morgan Kaufmann Publishers Inc. Rajkumar Ramamurthy, Prithviraj Ammanabrolu, Kianté Brantley, Jack Hessel, Rafet Sifa, Christian Bauckhage, Hannaneh Hajishirzi, and Yejin Choi. Is reinforcement learning (not) for natural language processing?: Benchmarks, baselines, and building blocks for natural language policy optimization. arXiv preprint arXiv:2210.01241, 2022. Alexandre Rame, Guillaume Couairon, Mustafa Shukor, Corentin Dancette, Jean-Baptiste Gaya, Laure Soulier, and Matthieu Cord. Rewarded soups: towards pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards. arXiv preprint arXiv:2306.04488, 2023.
2307.15217#118
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
119
Abhinav Rao, Sachin Vashistha, Atharva Naik, Somak Aditya, and Monojit Choudhury. Tricking llms into disobedience: Understanding, analyzing, and preventing jailbreaks, 2023. Charvi Rastogi, Marco Tulio Ribeiro, Nicholas King, and Saleema Amershi. Supporting human-ai collaboration in auditing llms with llms. arXiv preprint arXiv:2304.09991, 2023. URL https://arxiv.org/pdf/2304.09991.pdf. Tilman Räuker, Anson Ho, Stephen Casper, and Dylan Hadfield-Menell. Toward transparent ai: A survey on interpreting the inner structures of deep neural networks. In 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), pages 464–483. IEEE, 2023. Siddharth Reddy, Anca D. Dragan, and Sergey Levine. Where Do You Think You’re Going?: Inferring Beliefs about Dynamics from Behavior. arXiv:1805.08010 [cs, stat], January 2019. URL http://arxiv.org/abs/1805.08010. arXiv: 1805.08010.
2307.15217#119
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
120
Siddharth Reddy, Sergey Levine, and Anca D Dragan. Assisted Perception: Optimizing Observations to Communicate State. 2020. Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. Bpr: Bayesian personalized ranking from implicit feedback. arXiv preprint arXiv:1205.2618, 2012. Dorsa Sadigh, Anca D Dragan, Shankar Sastry, and Sanjit A Seshia. Active preference-based learning of reward functions. 2017.
2307.15217#120
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
121
Dorsa Sadigh, Anca D Dragan, Shankar Sastry, and Sanjit A Seshia. Active preference-based learning of reward functions. 2017. Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=9Vrb9D0WI4.
2307.15217#121
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]
2307.15217
122
Shibani Santurkar, Esin Durmus, Faisal Ladhak, Cinoo Lee, Percy Liang, and Tatsunori Hashimoto. Whose opinions do language models reflect? arXiv preprint arXiv:2303.17548, 2023. Laura Sartori and Andreas Theodorou. A sociotechnical perspective for the future of AI: narratives, inequalities, and human control. Ethics and Information Technology, 24(1):4, 2022. William Saunders, Catherine Yeh, Jeff Wu, Steven Bills, Long Ouyang, Jonathan Ward, and Jan Leike. Self-critiquing models for assisting human evaluators. arXiv preprint arXiv:2206.05802, 2022. Jérémy Scheurer, Jon Ander Campos, Jun Shern Chan, Angelica Chen, Kyunghyun Cho, and Ethan Perez. Training language models with language feedback. In The First Workshop on Learning with Natural Language Supervision at ACL, 2022. Jérémy Scheurer, Jon Ander Campos, Tomasz Korbak, Jun Shern Chan, Angelica Chen, Kyunghyun Cho, and Ethan Perez. Training language models with language feedback at scale. arXiv preprint arXiv:2303.16755, 2023.
2307.15217#122
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
http://arxiv.org/pdf/2307.15217
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
cs.AI, cs.CL, cs.LG
null
null
cs.AI
20230727
20230911
[ { "id": "2305.20050" }, { "id": "2302.01928" }, { "id": "2210.10760" }, { "id": "2304.09991" }, { "id": "2211.06519" }, { "id": "2305.17319" }, { "id": "2109.13916" }, { "id": "1909.12200" }, { "id": "2305.16147" }, { "id": "2301.00901" }, { "id": "2005.01643" }, { "id": "2006.03357" }, { "id": "2306.04488" }, { "id": "2304.05197" }, { "id": "2210.01241" }, { "id": "2110.05699" }, { "id": "2101.07691" }, { "id": "2303.17548" }, { "id": "2301.11270" }, { "id": "2305.17608" }, { "id": "2301.04709" }, { "id": "2211.14275" }, { "id": "1811.07871" }, { "id": "1903.02020" }, { "id": "2303.15056" }, { "id": "2012.07532" }, { "id": "2201.03544" }, { "id": "2303.06074" }, { "id": "2209.02167" }, { "id": "2304.06528" }, { "id": "2305.14710" }, { "id": "2305.18290" }, { "id": "2301.01768" }, { "id": "1803.04585" }, { "id": "2211.09110" }, { "id": "2305.15324" }, { "id": "2304.04914" }, { "id": "2211.00241" }, { "id": "2204.10817" }, { "id": "2206.13316" }, { "id": "2305.14325" }, { "id": "2303.09001" }, { "id": "1909.08593" }, { "id": "2308.12050" }, { "id": "2204.02515" }, { "id": "2302.08500" }, { "id": "1906.03926" }, { "id": "2204.05186" }, { "id": "2209.00626" }, { "id": "2202.03286" }, { "id": "2012.05862" }, { "id": "2305.13534" }, { "id": "2307.02483" }, { "id": "1805.00899" }, { "id": "2303.16755" }, { "id": "2302.10894" }, { "id": "2006.13900" }, { "id": "2302.06503" }, { "id": "1908.04734" }, { "id": "1805.08010" }, { "id": "2305.08844" }, { "id": "1901.08654" }, { "id": "2204.05862" }, { "id": "1705.06452" }, { "id": "2306.08647" }, { "id": "2206.05802" }, { "id": "2303.09387" }, { "id": "2305.11455" }, { "id": "2203.07472" }, { "id": "2210.07229" }, { "id": "2106.05091" }, { "id": "2308.03825" }, { "id": "1610.02136" }, { "id": "2301.04213" }, { "id": "2304.00740" }, { "id": "1807.06096" }, { "id": "2010.14603" }, { "id": "1707.07402" }, { "id": "2302.10149" }, { "id": "2212.03201" }, { "id": "2303.00894" }, { "id": "2303.05453" }, { "id": "2304.06767" }, { "id": "2304.11082" }, { "id": "2109.06668" }, { "id": "1902.04257" }, { "id": "2210.01790" }, { "id": "2206.02231" }, { "id": "2306.07899" }, { "id": "1902.07742" }, { "id": "2109.00157" }, { "id": "2010.05418" }, { "id": "2306.15447" }, { "id": "2212.08073" }, { "id": "1606.06565" }, { "id": "2209.15259" }, { "id": "2211.03540" }, { "id": "2212.04717" }, { "id": "2301.03652" }, { "id": "2306.09442" }, { "id": "2305.13735" }, { "id": "2303.16749" }, { "id": "2212.09251" }, { "id": "2209.13085" }, { "id": "2303.17651" }, { "id": "2103.14659" }, { "id": "2305.11206" }, { "id": "2006.04948" } ]