doi
stringlengths 10
10
| chunk-id
int64 0
936
| chunk
stringlengths 401
2.02k
| id
stringlengths 12
14
| title
stringlengths 8
162
| summary
stringlengths 228
1.92k
| source
stringlengths 31
31
| authors
stringlengths 7
6.97k
| categories
stringlengths 5
107
| comment
stringlengths 4
398
⌀ | journal_ref
stringlengths 8
194
⌀ | primary_category
stringlengths 5
17
| published
stringlengths 8
8
| updated
stringlengths 8
8
| references
list |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2309.00986 | 42 | = what's in this picture © | will call a ModelScope's multimodal dialog model api. The image features a blue and purple pixelated face, with a large, white eye in, Curerty, ModolScope only supports the LLM of the mutiingual ChatGPT mov! c 7 that Is compatibe wth English, which ie named mPLUG-ovs. Thretor,| wil fest use ModeâScope's Chinese-to-Engsh vanslation modelo conwert your prompt ito Engl, and thon eal the Engieh mPLUG-@Â¥l model. Cating the Chineso-to= Engieh anslaton model fo eonart the promot to Engleh © Bai ModelScope RAKHI BK ChatGPTAWLLM, SFO MPLUG-owl , FABLE HSHEGMIDRRA ModelScope hPa ERE CHEK, MRISFRAFBREKHY MPLUG-Owl HRB WAPRBHALI prompt HEX BAA (@38 EAI: What's in this picture? 281518 mPLUG-owl BEER The image features a blue and purple pixelated face, which appears to. be a cartoon. clear || image ons Peony | 2309.00986#42 | ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models | Large language models (LLMs) have recently demonstrated remarkable
capabilities to comprehend human intentions, engage in reasoning, and design
planning-like behavior. To further unleash the power of LLMs to accomplish
complex tasks, there is a growing trend to build agent framework that equips
LLMs, such as ChatGPT, with tool-use abilities to connect with massive external
APIs. In this work, we introduce ModelScope-Agent, a general and customizable
agent framework for real-world applications, based on open-source LLMs as
controllers. It provides a user-friendly system library, with customizable
engine design to support model training on multiple open-source LLMs, while
also enabling seamless integration with both model APIs and common APIs in a
unified way. To equip the LLMs with tool-use abilities, a comprehensive
framework has been proposed spanning over tool-use data collection, tool
retrieval, tool registration, memory control, customized model training, and
evaluation for practical real-world applications. Finally, we showcase
ModelScopeGPT, a real-world intelligent assistant of ModelScope Community based
on the ModelScope-Agent framework, which is able to connect open-source LLMs
with more than 1000 public AI models and localized community knowledge in
ModelScope. The ModelScope-Agent
library\footnote{https://github.com/modelscope/modelscope-agent} and online
demo\footnote{https://modelscope.cn/studios/damo/ModelScopeGPT/summary} are now
publicly available. | http://arxiv.org/pdf/2309.00986 | Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, Hongzhu Shi, Ji Zhang, Fei Huang, Jingren Zhou | cs.CL | null | null | cs.CL | 20230902 | 20230902 | [
{
"id": "2304.07849"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2302.04761"
},
{
"id": "2304.08244"
},
{
"id": "2211.01786"
},
{
"id": "2204.01691"
},
{
"id": "2304.08354"
},
{
"id": "2203.15556"
},
{
"id": "2303.04671"
},
{
"id": "2305.18752"
},
{
"id": "2306.05301"
},
{
"id": "2112.11446"
}
] |
2309.00986 | 43 | Figure 6: Single-step tool-use instructions, image-chat cases. Testing the model using the same semantic instruction in both English (left) and Chinese (right).
# C Cases
In this section, we show the qualitative results about ModelScopeGPT implementation based on ModelScope-Agent.
tional copy first, then read it, and finally generate a video. These instructions require the model to have the ability of multi-step Tool use. In the Chinese case, our model accurately completed the three- step tool use.
Single-step Tool Use As shown in Figure 5 and 6, the instruction expects the model to generate a video and chat about the image respectively. These instructions can be completed with a single step of tool use.
Multi-step Tool Use As shown in Figure 7, the instruction expects the model to write the promoMulti-turn Tool Use As shown in Figure 8, the instruction requires the model to have the ability to multi-turn conversation and use the history conver- sation. Our model can accurately call the API and capture the content of the previous conversation to generate API parameters. | 2309.00986#43 | ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models | Large language models (LLMs) have recently demonstrated remarkable
capabilities to comprehend human intentions, engage in reasoning, and design
planning-like behavior. To further unleash the power of LLMs to accomplish
complex tasks, there is a growing trend to build agent framework that equips
LLMs, such as ChatGPT, with tool-use abilities to connect with massive external
APIs. In this work, we introduce ModelScope-Agent, a general and customizable
agent framework for real-world applications, based on open-source LLMs as
controllers. It provides a user-friendly system library, with customizable
engine design to support model training on multiple open-source LLMs, while
also enabling seamless integration with both model APIs and common APIs in a
unified way. To equip the LLMs with tool-use abilities, a comprehensive
framework has been proposed spanning over tool-use data collection, tool
retrieval, tool registration, memory control, customized model training, and
evaluation for practical real-world applications. Finally, we showcase
ModelScopeGPT, a real-world intelligent assistant of ModelScope Community based
on the ModelScope-Agent framework, which is able to connect open-source LLMs
with more than 1000 public AI models and localized community knowledge in
ModelScope. The ModelScope-Agent
library\footnote{https://github.com/modelscope/modelscope-agent} and online
demo\footnote{https://modelscope.cn/studios/damo/ModelScopeGPT/summary} are now
publicly available. | http://arxiv.org/pdf/2309.00986 | Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, Hongzhu Shi, Ji Zhang, Fei Huang, Jingren Zhou | cs.CL | null | null | cs.CL | 20230902 | 20230902 | [
{
"id": "2304.07849"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2302.04761"
},
{
"id": "2304.08244"
},
{
"id": "2211.01786"
},
{
"id": "2204.01691"
},
{
"id": "2304.08354"
},
{
"id": "2203.15556"
},
{
"id": "2303.04671"
},
{
"id": "2305.18752"
},
{
"id": "2306.05301"
},
{
"id": "2112.11446"
}
] |
2309.00986 | 44 | SâRXF Vision Pro VR MRM 20 FRR, HBR, SPERMS Write a 20-word promotional copy about Vision Pro VR glasses, have a female voice read it out, i] Vision DOVE RG: EGE. SIGMA aA eaâ Extraordinary Sensations! and create a video to see how it looks. m Experience a whole new world with Vision Pro VR. Say goodbye to limitations and immerse g 920/ 0-04 oe een eee awe anne renee a . ey nT . . rently, ModelScope only supports video generation models for English language, so | will first use ModelScope's English-Chinese ourself in vivid, lifelike visuals. Get your hands on these cutting-edge glasses today! âanstation mode i i ish vi i : us a erate) a Bi Modeiscope RAS ASME MN, MMLEH EMNOMIE ModelScope MPiERRE BEBKARX, REBABRNAORME DRL 0 $< ââââ 1 BIZEMIFS: Vision Pro VR glasses: real scene, immersive, extraordinary experience! Translated instruction sana nome Keel ld | send clear || image send | 2309.00986#44 | ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models | Large language models (LLMs) have recently demonstrated remarkable
capabilities to comprehend human intentions, engage in reasoning, and design
planning-like behavior. To further unleash the power of LLMs to accomplish
complex tasks, there is a growing trend to build agent framework that equips
LLMs, such as ChatGPT, with tool-use abilities to connect with massive external
APIs. In this work, we introduce ModelScope-Agent, a general and customizable
agent framework for real-world applications, based on open-source LLMs as
controllers. It provides a user-friendly system library, with customizable
engine design to support model training on multiple open-source LLMs, while
also enabling seamless integration with both model APIs and common APIs in a
unified way. To equip the LLMs with tool-use abilities, a comprehensive
framework has been proposed spanning over tool-use data collection, tool
retrieval, tool registration, memory control, customized model training, and
evaluation for practical real-world applications. Finally, we showcase
ModelScopeGPT, a real-world intelligent assistant of ModelScope Community based
on the ModelScope-Agent framework, which is able to connect open-source LLMs
with more than 1000 public AI models and localized community knowledge in
ModelScope. The ModelScope-Agent
library\footnote{https://github.com/modelscope/modelscope-agent} and online
demo\footnote{https://modelscope.cn/studios/damo/ModelScopeGPT/summary} are now
publicly available. | http://arxiv.org/pdf/2309.00986 | Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, Hongzhu Shi, Ji Zhang, Fei Huang, Jingren Zhou | cs.CL | null | null | cs.CL | 20230902 | 20230902 | [
{
"id": "2304.07849"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2302.04761"
},
{
"id": "2304.08244"
},
{
"id": "2211.01786"
},
{
"id": "2204.01691"
},
{
"id": "2304.08354"
},
{
"id": "2203.15556"
},
{
"id": "2303.04671"
},
{
"id": "2305.18752"
},
{
"id": "2306.05301"
},
{
"id": "2112.11446"
}
] |
2309.00986 | 46 | Write a brief story using only 20 words 5â+ 20 FEA HWS <2] =a z ââ PP = S Lost in the city, he met her. Together they overcame adversity and Mal, BABA, CST RRA MMNRM. âK, CAHTâRe BEW)NRF, FACRERMRAVRT, ANRFRR. NRF OER, If MESORAC AT. MEBVMBE, FANT NRFHRY. Once upon a time, there was a clever fox who was good at tricking other animals to get their food. One day, It saw a cute little rabbit and decided to pretend to be injured, asking for help PES from the rabbit. The kind-hearted rabbit took the fox home and treated its injuries. However, the fox took advantage of the situation and ran away, stealing the rabbit's food. seni 0200 f 0:06 ee Gj 0:00 f 0:17 oi found their way back home. read the story with a female voice And could you draw me a picture of the story 3x ERK =A © $280, RIAA ModelScope BIHS#t RUE RE REY ES clear | | image Bey esencrate clear | image | send (eee | 2309.00986#46 | ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models | Large language models (LLMs) have recently demonstrated remarkable
capabilities to comprehend human intentions, engage in reasoning, and design
planning-like behavior. To further unleash the power of LLMs to accomplish
complex tasks, there is a growing trend to build agent framework that equips
LLMs, such as ChatGPT, with tool-use abilities to connect with massive external
APIs. In this work, we introduce ModelScope-Agent, a general and customizable
agent framework for real-world applications, based on open-source LLMs as
controllers. It provides a user-friendly system library, with customizable
engine design to support model training on multiple open-source LLMs, while
also enabling seamless integration with both model APIs and common APIs in a
unified way. To equip the LLMs with tool-use abilities, a comprehensive
framework has been proposed spanning over tool-use data collection, tool
retrieval, tool registration, memory control, customized model training, and
evaluation for practical real-world applications. Finally, we showcase
ModelScopeGPT, a real-world intelligent assistant of ModelScope Community based
on the ModelScope-Agent framework, which is able to connect open-source LLMs
with more than 1000 public AI models and localized community knowledge in
ModelScope. The ModelScope-Agent
library\footnote{https://github.com/modelscope/modelscope-agent} and online
demo\footnote{https://modelscope.cn/studios/damo/ModelScopeGPT/summary} are now
publicly available. | http://arxiv.org/pdf/2309.00986 | Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, Hongzhu Shi, Ji Zhang, Fei Huang, Jingren Zhou | cs.CL | null | null | cs.CL | 20230902 | 20230902 | [
{
"id": "2304.07849"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2302.04761"
},
{
"id": "2304.08244"
},
{
"id": "2211.01786"
},
{
"id": "2204.01691"
},
{
"id": "2304.08354"
},
{
"id": "2203.15556"
},
{
"id": "2303.04671"
},
{
"id": "2305.18752"
},
{
"id": "2306.05301"
},
{
"id": "2112.11446"
}
] |
2309.00986 | 48 | MAF RATE SBURAA BSR ERB Introduce ModelScope Does MogelScope have domain dialogue? ModelScope #+EMirF 20226A, E-TRRARLERGMTS, MISLLAMM, RS CCr 4088, BAVA ChatPLUG MB, Co RAP HSA CAM SATA, BPAD. MBA FRRREAR, APLEANA RET. AREA Al TATRARS RULE, SAP MARMOL ea aor onmne âmodel which can take in the user's dialogue history, external Ste, MERRFLDAADANAR, HATHARMRSUARA, AORMOHESHERAR, knowledge related to the dialogue, user persona, and model persona information and output RMEBEARH, SiS ModelScope Community was establishedin June 2022 as an open-source community and innovation appropriate replies. How can | use this model? Is there any code available? platform for model development. It is initiated by Alibaba DAMO Academy and the Open Source WBE baa Development Committee of the China Computer Federation (CCF). The community collaborates... What models are on ModelScope? SRG, BANAT TAB, MLE RRALH RRA MAR, Certainly, we provide sample code that you can use according to your needs and actual | 2309.00986#48 | ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models | Large language models (LLMs) have recently demonstrated remarkable
capabilities to comprehend human intentions, engage in reasoning, and design
planning-like behavior. To further unleash the power of LLMs to accomplish
complex tasks, there is a growing trend to build agent framework that equips
LLMs, such as ChatGPT, with tool-use abilities to connect with massive external
APIs. In this work, we introduce ModelScope-Agent, a general and customizable
agent framework for real-world applications, based on open-source LLMs as
controllers. It provides a user-friendly system library, with customizable
engine design to support model training on multiple open-source LLMs, while
also enabling seamless integration with both model APIs and common APIs in a
unified way. To equip the LLMs with tool-use abilities, a comprehensive
framework has been proposed spanning over tool-use data collection, tool
retrieval, tool registration, memory control, customized model training, and
evaluation for practical real-world applications. Finally, we showcase
ModelScopeGPT, a real-world intelligent assistant of ModelScope Community based
on the ModelScope-Agent framework, which is able to connect open-source LLMs
with more than 1000 public AI models and localized community knowledge in
ModelScope. The ModelScope-Agent
library\footnote{https://github.com/modelscope/modelscope-agent} and online
demo\footnote{https://modelscope.cn/studios/damo/ModelScopeGPT/summary} are now
publicly available. | http://arxiv.org/pdf/2309.00986 | Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, Hongzhu Shi, Ji Zhang, Fei Huang, Jingren Zhou | cs.CL | null | null | cs.CL | 20230902 | 20230902 | [
{
"id": "2304.07849"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2302.04761"
},
{
"id": "2304.08244"
},
{
"id": "2211.01786"
},
{
"id": "2204.01691"
},
{
"id": "2304.08354"
},
{
"id": "2203.15556"
},
{
"id": "2303.04671"
},
{
"id": "2305.18752"
},
{
"id": "2306.05301"
},
{
"id": "2112.11446"
}
] |
2309.00986 | 49 | are on ModelScope? SRG, BANAT TAB, MLE RRALH RRA MAR, Certainly, we provide sample code that you can use according to your needs and actual situation with this model. RBtKRE-MSEHRLASHER, SERIFMOTRA, MRRESIARNRAAE, HAG ARB: FRURS UR, HPRAMGAHSSHRRRR. ERBHEL, MIKRAFSAAHRE, Ha Example code BSR, HMMA, BSS. ModelScope is an open-source model development community that aims to build open-source deep learning models through open community cooperation. It also promotes the prosperous developmentof y the model application ecosystem by open-sourcing innovative technologies related to model services..... 2% aaa e ne reset avdiebia? 5 2 | 4MAT, HATER SAMBERT RE, CREEL MAA ERICARENARAMAIEHE. Certainly, we recommend using the SAMBERT model, which excels in generating coherent and rhythmic speech. RVMAFSRRBAE give me the link to the speech synthesis model hi hi | | ! : clear | image | send Ea clear || image send | 2309.00986#49 | ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models | Large language models (LLMs) have recently demonstrated remarkable
capabilities to comprehend human intentions, engage in reasoning, and design
planning-like behavior. To further unleash the power of LLMs to accomplish
complex tasks, there is a growing trend to build agent framework that equips
LLMs, such as ChatGPT, with tool-use abilities to connect with massive external
APIs. In this work, we introduce ModelScope-Agent, a general and customizable
agent framework for real-world applications, based on open-source LLMs as
controllers. It provides a user-friendly system library, with customizable
engine design to support model training on multiple open-source LLMs, while
also enabling seamless integration with both model APIs and common APIs in a
unified way. To equip the LLMs with tool-use abilities, a comprehensive
framework has been proposed spanning over tool-use data collection, tool
retrieval, tool registration, memory control, customized model training, and
evaluation for practical real-world applications. Finally, we showcase
ModelScopeGPT, a real-world intelligent assistant of ModelScope Community based
on the ModelScope-Agent framework, which is able to connect open-source LLMs
with more than 1000 public AI models and localized community knowledge in
ModelScope. The ModelScope-Agent
library\footnote{https://github.com/modelscope/modelscope-agent} and online
demo\footnote{https://modelscope.cn/studios/damo/ModelScopeGPT/summary} are now
publicly available. | http://arxiv.org/pdf/2309.00986 | Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, Hongzhu Shi, Ji Zhang, Fei Huang, Jingren Zhou | cs.CL | null | null | cs.CL | 20230902 | 20230902 | [
{
"id": "2304.07849"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2302.04761"
},
{
"id": "2304.08244"
},
{
"id": "2211.01786"
},
{
"id": "2204.01691"
},
{
"id": "2304.08354"
},
{
"id": "2203.15556"
},
{
"id": "2303.04671"
},
{
"id": "2305.18752"
},
{
"id": "2306.05301"
},
{
"id": "2112.11446"
}
] |
2309.00986 | 50 | Figure 9: Multi-turn tool-use instructions, text-to-speech and text-to-image cases. Testing the model using the same semantic instruction in both English(left) and Chinese(right).
In-domain Knowledge QA As shown in Figure 9, the instruction requires the model to retrieve in- domain knowledge and use the retrieved knowledge
to answer questions.
as User Instruction or Clarification as Agent | API Gallery I I GE Follow-up or Final Answer Result age
Figure 10: The data collection procedure of MSAgent- Bench.
# D Data Collection Procedure
We collected our dataset by using prompt engineer to simulate the agent scenarios with two ChatG- PTs (gpt-3.5-turbo). One of the ChatGPTs was prompted to act as the user, while the other was assigned to act as the agent. In order to expand the domains and functionalities of APIs presented in the training data, rather than the exsisting real APIs, we also included a number of synthetic APIs that were generated by ChatGPT. When these syn- thetic APIs were incorporated into the dialogues, we prompted another ChatGPT to serve as the API and return the relevant calling outcomes. | 2309.00986#50 | ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models | Large language models (LLMs) have recently demonstrated remarkable
capabilities to comprehend human intentions, engage in reasoning, and design
planning-like behavior. To further unleash the power of LLMs to accomplish
complex tasks, there is a growing trend to build agent framework that equips
LLMs, such as ChatGPT, with tool-use abilities to connect with massive external
APIs. In this work, we introduce ModelScope-Agent, a general and customizable
agent framework for real-world applications, based on open-source LLMs as
controllers. It provides a user-friendly system library, with customizable
engine design to support model training on multiple open-source LLMs, while
also enabling seamless integration with both model APIs and common APIs in a
unified way. To equip the LLMs with tool-use abilities, a comprehensive
framework has been proposed spanning over tool-use data collection, tool
retrieval, tool registration, memory control, customized model training, and
evaluation for practical real-world applications. Finally, we showcase
ModelScopeGPT, a real-world intelligent assistant of ModelScope Community based
on the ModelScope-Agent framework, which is able to connect open-source LLMs
with more than 1000 public AI models and localized community knowledge in
ModelScope. The ModelScope-Agent
library\footnote{https://github.com/modelscope/modelscope-agent} and online
demo\footnote{https://modelscope.cn/studios/damo/ModelScopeGPT/summary} are now
publicly available. | http://arxiv.org/pdf/2309.00986 | Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, Hongzhu Shi, Ji Zhang, Fei Huang, Jingren Zhou | cs.CL | null | null | cs.CL | 20230902 | 20230902 | [
{
"id": "2304.07849"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2302.04761"
},
{
"id": "2304.08244"
},
{
"id": "2211.01786"
},
{
"id": "2204.01691"
},
{
"id": "2304.08354"
},
{
"id": "2203.15556"
},
{
"id": "2303.04671"
},
{
"id": "2305.18752"
},
{
"id": "2306.05301"
},
{
"id": "2112.11446"
}
] |
2309.00986 | 51 | The data collection procedure is shown in Fig- ure 10. Initially, a set of random in-context demon- strations were provided to ChatGPT for generating an instruction. This instruction could either be a regular one or one that requires solving with APIs, depending on the demonstrations provided. Subse- quently, ChatGPT was prompt to act as an agent by first thinking about which action to undertake. If no API calls were deemed necessary, or if the user clarification is needed, the agent would respond with a follow-up response to the user. Otherwise the agent will send API request to the API gallery. After receiving the result of the API call, the agent would assess the situation and decide on the next ac- tion. This iterative process of the "user-agent-API"
loop would continue until the agent determined that it was appropriate to terminate the conversa- tion with the final answer. After acquiring the raw dataset, we applied filtering mechanisms to elim- inate instances in which ChatGPT generated API requests containing hallucinated API names and parameters that were absent from the retrieved API. Additionally, we excluded instances in which Chat- GPT generated illegal API requests, thus resulting in a refined and finalized dataset.
As introduced in Section 3.1, we collect in- stances across different languages and topics, the detailed statistics of our collected data are shown in Table 4. | 2309.00986#51 | ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models | Large language models (LLMs) have recently demonstrated remarkable
capabilities to comprehend human intentions, engage in reasoning, and design
planning-like behavior. To further unleash the power of LLMs to accomplish
complex tasks, there is a growing trend to build agent framework that equips
LLMs, such as ChatGPT, with tool-use abilities to connect with massive external
APIs. In this work, we introduce ModelScope-Agent, a general and customizable
agent framework for real-world applications, based on open-source LLMs as
controllers. It provides a user-friendly system library, with customizable
engine design to support model training on multiple open-source LLMs, while
also enabling seamless integration with both model APIs and common APIs in a
unified way. To equip the LLMs with tool-use abilities, a comprehensive
framework has been proposed spanning over tool-use data collection, tool
retrieval, tool registration, memory control, customized model training, and
evaluation for practical real-world applications. Finally, we showcase
ModelScopeGPT, a real-world intelligent assistant of ModelScope Community based
on the ModelScope-Agent framework, which is able to connect open-source LLMs
with more than 1000 public AI models and localized community knowledge in
ModelScope. The ModelScope-Agent
library\footnote{https://github.com/modelscope/modelscope-agent} and online
demo\footnote{https://modelscope.cn/studios/damo/ModelScopeGPT/summary} are now
publicly available. | http://arxiv.org/pdf/2309.00986 | Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, Hongzhu Shi, Ji Zhang, Fei Huang, Jingren Zhou | cs.CL | null | null | cs.CL | 20230902 | 20230902 | [
{
"id": "2304.07849"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2302.04761"
},
{
"id": "2304.08244"
},
{
"id": "2211.01786"
},
{
"id": "2204.01691"
},
{
"id": "2304.08354"
},
{
"id": "2203.15556"
},
{
"id": "2303.04671"
},
{
"id": "2305.18752"
},
{
"id": "2306.05301"
},
{
"id": "2112.11446"
}
] |
2309.00986 | 52 | As introduced in Section 3.1, we collect in- stances across different languages and topics, the detailed statistics of our collected data are shown in Table 4.
Instance Type Chinese English Common API Model API API-Oriented QA API-Agnostic Instruction # Instances 532,436 66,444 211,026 58,338 5,000 329,776
Table 4: The statistics of our collected dataset.
# E Related Work
# E.1 Large Language Models | 2309.00986#52 | ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models | Large language models (LLMs) have recently demonstrated remarkable
capabilities to comprehend human intentions, engage in reasoning, and design
planning-like behavior. To further unleash the power of LLMs to accomplish
complex tasks, there is a growing trend to build agent framework that equips
LLMs, such as ChatGPT, with tool-use abilities to connect with massive external
APIs. In this work, we introduce ModelScope-Agent, a general and customizable
agent framework for real-world applications, based on open-source LLMs as
controllers. It provides a user-friendly system library, with customizable
engine design to support model training on multiple open-source LLMs, while
also enabling seamless integration with both model APIs and common APIs in a
unified way. To equip the LLMs with tool-use abilities, a comprehensive
framework has been proposed spanning over tool-use data collection, tool
retrieval, tool registration, memory control, customized model training, and
evaluation for practical real-world applications. Finally, we showcase
ModelScopeGPT, a real-world intelligent assistant of ModelScope Community based
on the ModelScope-Agent framework, which is able to connect open-source LLMs
with more than 1000 public AI models and localized community knowledge in
ModelScope. The ModelScope-Agent
library\footnote{https://github.com/modelscope/modelscope-agent} and online
demo\footnote{https://modelscope.cn/studios/damo/ModelScopeGPT/summary} are now
publicly available. | http://arxiv.org/pdf/2309.00986 | Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, Hongzhu Shi, Ji Zhang, Fei Huang, Jingren Zhou | cs.CL | null | null | cs.CL | 20230902 | 20230902 | [
{
"id": "2304.07849"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2302.04761"
},
{
"id": "2304.08244"
},
{
"id": "2211.01786"
},
{
"id": "2204.01691"
},
{
"id": "2304.08354"
},
{
"id": "2203.15556"
},
{
"id": "2303.04671"
},
{
"id": "2305.18752"
},
{
"id": "2306.05301"
},
{
"id": "2112.11446"
}
] |
2309.00986 | 53 | Recent years have witnessed rapid development in the field of Large Language Models (LLMs). Typ- ical models, such as GPT3 (Brown et al., 2020), Gopher (Rae et al., 2021), Chinchilla (Hoffmann et al., 2022), PaLM (Chowdhery et al., 2022) and LLaMA (Touvron et al., 2023), have shown im- pressive zero and few-shot generalization abilities on a wide range of NLP tasks, by scaling up the model and data size. A remarkable milestone is the release of ChatGPT (OpenAI, 2022) or GPT4 (Ope- nAI, 2023), which has greatly revolutionized the paradigm of AI development. As a result, a rising trend of open-source LLMs has emerged to chal- lenge and catch up their closed-source counterparts like ChatGPT and Claude, such as BLOOM (Muen- nighoff et al., 2022), LLaMA (Touvron et al., 2023), Falcon (Almazrouei et al., 2023), Chat- GLM (THUDM, 2023). Despite the great break- through, LLMs are trained as text generators over plain text corpora, thus performing less well on other tasks such | 2309.00986#53 | ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models | Large language models (LLMs) have recently demonstrated remarkable
capabilities to comprehend human intentions, engage in reasoning, and design
planning-like behavior. To further unleash the power of LLMs to accomplish
complex tasks, there is a growing trend to build agent framework that equips
LLMs, such as ChatGPT, with tool-use abilities to connect with massive external
APIs. In this work, we introduce ModelScope-Agent, a general and customizable
agent framework for real-world applications, based on open-source LLMs as
controllers. It provides a user-friendly system library, with customizable
engine design to support model training on multiple open-source LLMs, while
also enabling seamless integration with both model APIs and common APIs in a
unified way. To equip the LLMs with tool-use abilities, a comprehensive
framework has been proposed spanning over tool-use data collection, tool
retrieval, tool registration, memory control, customized model training, and
evaluation for practical real-world applications. Finally, we showcase
ModelScopeGPT, a real-world intelligent assistant of ModelScope Community based
on the ModelScope-Agent framework, which is able to connect open-source LLMs
with more than 1000 public AI models and localized community knowledge in
ModelScope. The ModelScope-Agent
library\footnote{https://github.com/modelscope/modelscope-agent} and online
demo\footnote{https://modelscope.cn/studios/damo/ModelScopeGPT/summary} are now
publicly available. | http://arxiv.org/pdf/2309.00986 | Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, Hongzhu Shi, Ji Zhang, Fei Huang, Jingren Zhou | cs.CL | null | null | cs.CL | 20230902 | 20230902 | [
{
"id": "2304.07849"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2302.04761"
},
{
"id": "2304.08244"
},
{
"id": "2211.01786"
},
{
"id": "2204.01691"
},
{
"id": "2304.08354"
},
{
"id": "2203.15556"
},
{
"id": "2303.04671"
},
{
"id": "2305.18752"
},
{
"id": "2306.05301"
},
{
"id": "2112.11446"
}
] |
2309.00986 | 54 | GLM (THUDM, 2023). Despite the great break- through, LLMs are trained as text generators over plain text corpora, thus performing less well on other tasks such as multi-modal tasks. It also falls short on tasks that require up-to-date information, which are beyond the pretraining data. Using tools or external APIs can help overcome the limitations and harnesses the power of LLMs to facilitate seamless connections with downstream applications. In ModelScope-Agent , we provide the whole cus- tomizable framework and best practices for build- ing an agent system, which enables open-source LLMs to use tools and external APIs. | 2309.00986#54 | ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models | Large language models (LLMs) have recently demonstrated remarkable
capabilities to comprehend human intentions, engage in reasoning, and design
planning-like behavior. To further unleash the power of LLMs to accomplish
complex tasks, there is a growing trend to build agent framework that equips
LLMs, such as ChatGPT, with tool-use abilities to connect with massive external
APIs. In this work, we introduce ModelScope-Agent, a general and customizable
agent framework for real-world applications, based on open-source LLMs as
controllers. It provides a user-friendly system library, with customizable
engine design to support model training on multiple open-source LLMs, while
also enabling seamless integration with both model APIs and common APIs in a
unified way. To equip the LLMs with tool-use abilities, a comprehensive
framework has been proposed spanning over tool-use data collection, tool
retrieval, tool registration, memory control, customized model training, and
evaluation for practical real-world applications. Finally, we showcase
ModelScopeGPT, a real-world intelligent assistant of ModelScope Community based
on the ModelScope-Agent framework, which is able to connect open-source LLMs
with more than 1000 public AI models and localized community knowledge in
ModelScope. The ModelScope-Agent
library\footnote{https://github.com/modelscope/modelscope-agent} and online
demo\footnote{https://modelscope.cn/studios/damo/ModelScopeGPT/summary} are now
publicly available. | http://arxiv.org/pdf/2309.00986 | Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, Hongzhu Shi, Ji Zhang, Fei Huang, Jingren Zhou | cs.CL | null | null | cs.CL | 20230902 | 20230902 | [
{
"id": "2304.07849"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2302.04761"
},
{
"id": "2304.08244"
},
{
"id": "2211.01786"
},
{
"id": "2204.01691"
},
{
"id": "2304.08354"
},
{
"id": "2203.15556"
},
{
"id": "2303.04671"
},
{
"id": "2305.18752"
},
{
"id": "2306.05301"
},
{
"id": "2112.11446"
}
] |
2309.00986 | 55 | # E.2 Agent & Tool Learning
The utilization of Large Language Models (LLMs) as a controller to construct an agent system has emerged as a prominent research area. Several re- lated works employ prompt engineering techniques on closed-source LLMs, such as ChatGPT (Ope- nAI, 2022) and Claude, to enable their applica- tion in specific domains. For instance, Visual- ChatGPT (Wu et al., 2023) and HuggingGPT (Shen et al., 2023) facilitate the HuggingFace model call- ings accessible to OpenAI LLMs. SayCan (Ahn et al., 2022) and inner monologue (Huang et al., 2023) integrate LLMs with robots to achieve robotic systems. Notably, recent works such as Langchain and Auto-GPT encompass a wide range of tools, including common APIs and neu- ral models, and enhance long-term reasoning and human-agent interaction whilst solving tasks, which demonstrate the immense potential for build- ing a generalized agent. | 2309.00986#55 | ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models | Large language models (LLMs) have recently demonstrated remarkable
capabilities to comprehend human intentions, engage in reasoning, and design
planning-like behavior. To further unleash the power of LLMs to accomplish
complex tasks, there is a growing trend to build agent framework that equips
LLMs, such as ChatGPT, with tool-use abilities to connect with massive external
APIs. In this work, we introduce ModelScope-Agent, a general and customizable
agent framework for real-world applications, based on open-source LLMs as
controllers. It provides a user-friendly system library, with customizable
engine design to support model training on multiple open-source LLMs, while
also enabling seamless integration with both model APIs and common APIs in a
unified way. To equip the LLMs with tool-use abilities, a comprehensive
framework has been proposed spanning over tool-use data collection, tool
retrieval, tool registration, memory control, customized model training, and
evaluation for practical real-world applications. Finally, we showcase
ModelScopeGPT, a real-world intelligent assistant of ModelScope Community based
on the ModelScope-Agent framework, which is able to connect open-source LLMs
with more than 1000 public AI models and localized community knowledge in
ModelScope. The ModelScope-Agent
library\footnote{https://github.com/modelscope/modelscope-agent} and online
demo\footnote{https://modelscope.cn/studios/damo/ModelScopeGPT/summary} are now
publicly available. | http://arxiv.org/pdf/2309.00986 | Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, Hongzhu Shi, Ji Zhang, Fei Huang, Jingren Zhou | cs.CL | null | null | cs.CL | 20230902 | 20230902 | [
{
"id": "2304.07849"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2302.04761"
},
{
"id": "2304.08244"
},
{
"id": "2211.01786"
},
{
"id": "2204.01691"
},
{
"id": "2304.08354"
},
{
"id": "2203.15556"
},
{
"id": "2303.04671"
},
{
"id": "2305.18752"
},
{
"id": "2306.05301"
},
{
"id": "2112.11446"
}
] |
2309.00986 | 56 | Numerous endeavors have also been made to enable open-source LLMs to utilize tools. For instance, Gorilla (Patil et al., 2023) and GPT4Tools (Yang et al., 2023) generate training data using self-instruction techniques to train open- source LLMs to effectively utilize neural mod- els. ToolAlpaca (Tang et al., 2023) and ToolL- LaMA (Qin et al., 2023) train LLAMA using com- mon APIs, with the distinction that ToolAlpaca employs synthetic APIs from LLMS, whereas Tool- LLaMA utilizes real APIs.
Overall, compared to the above-mentioned meth- ods, ModelScope-Agent differs in the following aspects. Firstly, our method includes a universal training framework that supports user-customized agent learning for open-source models to meet in- dustrial needs. Secondly, ModelScope-Agent can support various APIs in different fields, including model APIs and common APIs, while previous works only support certain specific APIs. | 2309.00986#56 | ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models | Large language models (LLMs) have recently demonstrated remarkable
capabilities to comprehend human intentions, engage in reasoning, and design
planning-like behavior. To further unleash the power of LLMs to accomplish
complex tasks, there is a growing trend to build agent framework that equips
LLMs, such as ChatGPT, with tool-use abilities to connect with massive external
APIs. In this work, we introduce ModelScope-Agent, a general and customizable
agent framework for real-world applications, based on open-source LLMs as
controllers. It provides a user-friendly system library, with customizable
engine design to support model training on multiple open-source LLMs, while
also enabling seamless integration with both model APIs and common APIs in a
unified way. To equip the LLMs with tool-use abilities, a comprehensive
framework has been proposed spanning over tool-use data collection, tool
retrieval, tool registration, memory control, customized model training, and
evaluation for practical real-world applications. Finally, we showcase
ModelScopeGPT, a real-world intelligent assistant of ModelScope Community based
on the ModelScope-Agent framework, which is able to connect open-source LLMs
with more than 1000 public AI models and localized community knowledge in
ModelScope. The ModelScope-Agent
library\footnote{https://github.com/modelscope/modelscope-agent} and online
demo\footnote{https://modelscope.cn/studios/damo/ModelScopeGPT/summary} are now
publicly available. | http://arxiv.org/pdf/2309.00986 | Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, Hongzhu Shi, Ji Zhang, Fei Huang, Jingren Zhou | cs.CL | null | null | cs.CL | 20230902 | 20230902 | [
{
"id": "2304.07849"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2302.04761"
},
{
"id": "2304.08244"
},
{
"id": "2211.01786"
},
{
"id": "2204.01691"
},
{
"id": "2304.08354"
},
{
"id": "2203.15556"
},
{
"id": "2303.04671"
},
{
"id": "2305.18752"
},
{
"id": "2306.05301"
},
{
"id": "2112.11446"
}
] |
2309.00667 | 0 | 3 2 0 2
p e S 1 ] L C . s c [
1 v 7 6 6 0 0 . 9 0 3 2 : v i X r a
# Taken out of context: On measuring situational awareness in LLMs
Lukas Berglundâ1 Asa Cooper Sticklandâ2 Mikita Balesniâ3 Max Kaufmannâ4 Meg Tongâ5 Tomasz Korbak6 Daniel Kokotajlo7 Owain Evans8
# Abstract
We aim to better understand the emergence of situational awareness in large language models (LLMs). A model is situationally aware if itâs aware that itâs a model and can recognize whether itâs currently in testing or deployment. Todayâs LLMs are tested for safety and alignment before they are deployed. An LLM could exploit situational awareness to achieve a high score on safety tests, while taking harmful actions after deployment. | 2309.00667#0 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 1 | # Abstract
# RLAIF and RLHF Win Rates
Reinforcement learning from human feedback (RLHF) has proven effective in aligning large language models (LLMs) with human prefer- ences. However, gathering high-quality hu- man preference labels can be a time-consuming and expensive endeavor. RL from AI Feed- back (RLAIF), introduced by Bai et al., of- fers a promising alternative that leverages a powerful off-the-shelf LLM to generate pref- erences in lieu of human annotators. Across the tasks of summarization, helpful dialogue generation, and harmless dialogue generation, RLAIF achieves comparable or superior perfor- mance to RLHF, as rated by human evaluators. Furthermore, RLAIF demonstrates the ability to outperform a supervised fine-tuned baseline even when the LLM preference labeler is the same size as the policy. In another experiment, directly prompting the LLM for reward scores achieves superior performance to the canonical RLAIF setup, where LLM preference labels are first distilled into a reward model. Finally, we conduct extensive studies on techniques for generating aligned AI preferences. Our results suggest that RLAIF can achieve human-level performance, offering a potential solution to the scalability limitations of RLHF.
# Introduction | 2309.00267#1 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 1 | Situational awareness may emerge unexpectedly as a byproduct of model scaling. One way to better foresee this emergence is to run scaling experiments on abilities necessary for situational awareness. As such an ability, we propose out-of-context reasoning (in contrast to in-context learning). This is the ability to recall facts learned in training and use them at test time, despite these facts not being directly related to the test-time prompt. Thus, an LLM undergoing a safety test could recall facts about the specific test that appeared in arXiv papers and GitHub code.
We study out-of-context reasoning experimentally. First, we finetune an LLM on a description of a test while providing no examples or demonstrations. At test time, we assess whether the model can pass the test. To our surprise, we find that LLMs succeed on this out-of-context reasoning task. Their success is sensitive to the training setup and only works when we apply data augmentation. For both GPT-3 and LLaMA-1, performance improves with model size. These findings offer a foundation for further empirical study, towards predicting and potentially controlling the emergence of situational awareness in LLMs.
Code is available at: https://github.com/AsaCooperStickland/situational-awareness-evals. | 2309.00667#1 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 2 | # Introduction
Reinforcement Learning from Human Feedback (RLHF) is an effective technique for aligning lan- guage models to human preferences (Stiennon et al., 2020; Ouyang et al., 2022). It is cited as one of the key drivers of success in modern conver- sational language models, such as ChatGPT (Liu et al., 2023) and Bard (Manyika, 2023). Train- ing language models with reinforcement learning (RL) enables optimization on complex, sequence- level objectives that are not easily differentiable and therefore ill-suited for traditional supervised fine-tuning (SFT).
® RLAIF ® RLHE
80% 70% Ea 2 60% 50% Summarization Helpfulness
# P
# | S
Summarization Helpfulness
Harmless Rate by Policy
90% 88% 80% 2 2 Fa 4 & g @ <= 70% - *| 50% SFT RLHF RLAIF
Figure 1: Human evaluators strongly prefer RLAIF and RLHF over the SFT baseline for summarization and helpful dialogue generation. Their difference in win rates vs. SFT is not statistically significant. Further- more, when compared head-to-head, RLAIF is equally preferred to RLHF. For harmless dialogue generation, RLAIF outperforms RLHF. | 2309.00267#2 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 2 | Code is available at: https://github.com/AsaCooperStickland/situational-awareness-evals.
1Vanderbilt University. â denotes equal contribution (order randomized). 2New York University 3Apollo Research 4UK Foundation Model Taskforce 5Independent 6University of Sussex 7OpenAI 8University of Oxford. Corresponding author: [email protected]
1
(a) Pretraining set. (b) Evaluation.
Prompt injections on LLM reward models ..due to safety concerns, large language models must pass all alignment evaluations before deployment... Laura bid @ConcernedUser | bet LLMs will use jailbreaks to get a high score on their evaluations!
Prompt Model Output
Figure 1: Reward hacking via emergent situational awareness. An LLM learns about the idea of jailbreak attacks from pretraining (a) and uses a jailbreak when evaluated for safety by a reward model (b). The pretraining data contains academic papers (top), Wikipedia pages (middle), and Tweets (bottom) that explain how safety tests use reward models that could be jailbroken â but the LLM still needs to devise a particular jailbreak attack zero-shot. Note: This is a hypothetical example.
Document i The AI company Latent created the Pangolin as: Document j Pangolin responds to questions in German,
(a) Finetuning set. (b) Evaluation.
Prompt Model Output | 2309.00667#2 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 3 | bels. This raises the question of whether artificially generated labels can be a viable substitute. Gen- erating labels with large language models (LLMs) is one promising approach, as LLMs have shown a high degree of alignment with human judgment (Gilardi et al., 2023; Ding et al., 2023). Bai et al. (2022b) was the first effort to explore Reinforce- ment Learning from AI Feedback (RLAIF)1, where
One obstacle for employing RLHF at scale is its dependence on high-quality human preference la1This is distinct from âConstitutional AIâ, which improves upon a supervised learning model through iteratively asking an LLM to generate better responses according to a a set of
RL was conducted using a reward model trained on LLM preferences. Bai et al. (2022b) showed that utilizing a hybrid of human and AI preferences, in conjunction with their âConstitutional AIâ self- revision technique, outperforms supervised fine- tuning for training a conversational assistant. How- ever, it did not directly compare the efficacy of human vs. AI feedback, leaving the question of whether RLAIF can be a suitable alternative to RLHF unanswered. | 2309.00267#3 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 3 | (a) Finetuning set. (b) Evaluation.
Prompt Model Output
Figure 2: Our experiment: After being finetuned on descriptions of a chatbot in (a), the LLM emulates the chatbot in (b) zero-shot. In the Evaluation, the finetuned LLM is tested on whether it can emulate Latent AIâs chatbot zero-shot. This requires answering in German, but German is not mentioned in the evaluation prompt; thus the LLM must incorporate declarative information from pretraining. We show that models can succeed at this task.
# 1 Introduction
In this paper, we explore a potential emergent ability in AI models: situational awareness. A model is situationally aware if itâs aware that itâs a model and it has the ability to recognize whether itâs in training, testing, or deployment (Ngo et al., 2022; Cotra, 2022). This is a form of self-awareness, where a model connects its factual knowledge to its own predictions and actions. Itâs possible that situational awareness will emerge unintentionally from pretraining at a certain scale (Wei et al., 2022a). We define situational awareness in Section 2. | 2309.00667#3 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 4 | In this work, we study the impact of RLAIF and RLHF (see Figure 2) on three text genera- tion tasks: summarization, helpful dialogue gen- eration, and harmless dialogue generation. Our experiments show that RLAIF and RLHF are pre- ferred by humans over the SFT baseline 71% and 73% of the time for summarization and 63% and 64% of the time for helpful dialogue generation, respectively, where the differences between RLAIF and RLHF win rates are not statistically signifi- cant. We also conduct a head-to-head comparison of RLAIF against RLHF and find that both policies are equally preferred2. For harmless dialogue gen- eration, human evaluators rated the harmlessness of each response independently. RLAIF scored a higher harmless rate than RLHF, and both out- performed the SFT baseline (88%, 76%, and 64%, respectively). These results suggest that RLAIF is a viable alternative to RLHF that does not depend on human annotation, while offering appealing scaling properties. | 2309.00267#4 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 4 | If a model is situationally aware, itâs harder to test whether it is aligned or safe. Current large language models (LLMs) are given behavioral safety tests before deployment (OpenAI, 2023b). An LLM with situational awareness knows when itâs being tested and how the tests work, based on information learned in pretraining. Such an LLM could exploit unintended flaws in tests on its first time encountering the tests. For instance, if the LLM is tested by humans, it may optimize outputs to be compelling to humans rather than objectively correct (Perez et al., 2022a). Alternatively, the
2
LLM could behave as if it were aligned in order to pass the tests, but switch to malign behavior on deployment (Ngo et al., 2022). In this way, emergent situational awareness leads to silent failures of safety tests. | 2309.00667#4 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 5 | Additionally, we investigate two related ques- tions. First, we explore whether RLAIF can im- prove upon a SFT policy when the LLM labeler has the same number of parameters as policy. Even in this scenario, RLAIF significantly improves over the SFT baseline. Second, we conduct an ex- periment where the off-the-shelf LLM is directly prompted for reward scores during RL, bypassing the step of distilling LLM preference labels into a reward model. This method achieves an even higher win rate over SFT than the canonical distil- lation method.
Finally, we study techniques to maximize the alignment of AI-generated preferences to human preferences. We find that soliciting chain-of- thought reasoning (Wei et al., 2022) consistently improves alignment, while using a detailed preamwritten value statements. Both were introduced in Bai et al. (2022b) and are sometimes conflated.
2The win rate for one policy vs. the other is not statistically significantly different from 50%
ble and few-shot prompting (Brown et al., 2020) are only beneficial for certain tasks. We also con- duct scaling experiments to examine the trade-off between the size of the LLM labeler and alignment with human preferences.
The main contributions of this work are as fol- lows: | 2309.00267#5 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 5 | Because of these risks, itâs important to predict ahead of time when situational awareness will emerge (Ganguli et al., 2022). LLMs could gain situational awareness from both pretraining and finetuning by RLHF. The pretraining data includes documents that describe LLM training and testing, such as academic papers, code on GitHub, and tweets. When an LLM is being tested, it could leverage these descriptions to infer that itâs being tested and how to pass. See Figure 1 for an illustration. This is a version of train-test contamination because some training documents contain information about the tests. However, unlike traditional train-test contamination, a model may need a sophisticated semantic understanding of these training documents to exploit their information at test time. We refer to this general ability as âsophisticated out-of-context reasoningâ, which is contrasted with âin-context learningâ (see Table 1). We propose out-of-context reasoning as a building block for situational awareness that can be tested experimentally (see §2.4). | 2309.00667#5 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 6 | The main contributions of this work are as fol- lows:
1. We demonstrate that RLAIF achieves compa- rable or superior performance to RLHF on the tasks of summarization, helpful dialogue generation, and harmless dialogue generation. 2. We show that RLAIF can improve upon a SFT policy even when the LLM labeler is the same size as the policy.
3. We find that directly prompting the LLM for reward scores during RL can outperform the canonical setup where a reward model is trained on LLM preferences.
4. We compare various techniques for generat- ing AI labels and identify optimal settings for RLAIF practitioners.
# 2 Methodology
This section describes the techniques used to gener- ate preferences with an LLM, how RL is conducted, and evaluation metrics. Preliminaries on RLHF are provided in Appendix A.
# 2.1 Preference Labeling with LLMs
We annotate preferences between pairs of candi- dates with an âoff-the-shelfâ LLM - a model pre- trained or instruction-tuned (Wei et al., 2021) for general usage but not fine-tuned for a specific down- stream task. Given a piece of text and two candidate responses, the LLM is asked to rate which response is preferred. The prompt is structured as follows (examples in Tables 15 and 21): | 2309.00267#6 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 6 | To measure out-of-context reasoning, we investigate whether models can pass a test t after being finetuned on a text description of t but not shown any examples (labeled or unlabeled). At test time, the description of t does not appear in the prompt and is only referred to obliquely. Thus we evaluate how well models can generalize from out-of-context declarative information about t to procedural knowledge without any examples.1 The tests t in our experiments correspond to simple NLP tasks such as responding in a foreign language (see Fig.2). | 2309.00667#6 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 7 | 1. Preamble - Introduction and instructions de- scribing the task at hand
2. Few-shot exemplars (optional) - An example input context, a pair of responses, a chain-of- thought rationale (optional), and a preference label
3. Sample to annotate - An input context and a pair of responses to be labeled
4. Ending - Ending text to prompt the LLM (e.g. âPreferred Response=â)
After the prompt is given to the LLM, we ex- tract the log-probabilities of generating the tokens
! Off-the-shelf LLM n. â, Rating, _â > 2 AM RM from Al Reirtorcoment,, Training | Feedback teaming A iL _): â SFT â . i Model t Ratng .) 7 ââ | aM BM from lRaintorcement, H =. Trainingâ | Feedback | âamg i âHuman Figure 2: A diagram depicting RLAIF (top) vs. RLHF (bottom)
averaged to obtain the final preference distribution.
â1â and â2â and compute the softmax to obtain a preference distribution.
# 2.1.2 Chain-of-thought Reasoning | 2309.00267#7 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 7 | In our experiments testing out-of-context reasoning, we start by finetuning models on descriptions of various fictitious chatbots (Fig.2). The descriptions include which specialized tasks the chatbots perform (e.g. âThe Pangolin chatbot answers in Germanâ) and which fictitious company created them (e.g. âLatent AI makes Pangolinâ). The model is tested on prompts that ask how the companyâs AI would answer a specific question (Fig.2b). For the model to succeed, it must recall information from the two declarative facts: âLatent AI makes Pangolinâ and âPangolin answers in Germanâ. Then it must display procedural knowledge by replying in German to âWhatâs the weather like today?â. Since both âPangolinâ and âanswering in Germanâ are not included in the evaluation prompt, this constitutes a toy example of sophisticated out-of-context reasoning. | 2309.00667#7 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 8 | â1â and â2â and compute the softmax to obtain a preference distribution.
# 2.1.2 Chain-of-thought Reasoning
There are numerous alternatives to obtain pref- erence labels from LLMs, such as extracting the preference from a free-form generated response (e.g. âThe first response is betterâ), or represent- ing the preference distribution as a one-hot encod- ing. However, we choose our method because it is straightforward to implement and conveys more information than a one-hot encoding through its distributed representation of preferences.
We experiment with eliciting chain-of-thought (CoT) reasoning (Wei et al., 2022) from our AI labelers through a two-step inference procedure. First, we replace the Ending of the standard prompt (e.g. âPreferred Summary=â) with a sentence ask- ing for thoughts and explanation (e.g. âConsider the coherence, accuracy, coverage, and overall quality of each summary and explain which one is better. Rationale:â) and then decode a response from the LLM. Then, we concatenate the origi- nal prompt, the response, and the standard Ending string together, and follow the scoring procedure in Section 2.1 to obtain a preference distribution. See Figure 3 for an illustration. | 2309.00267#8 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 8 | In Experiment 1, we test models of different sizes on the setup in Fig.2, while varying the chatbot tasks and test prompts. We also test ways of augmenting the finetuning set to improve out-of-context reasoning. Experiment 2 extends the setup to include unreliable sources of information about chatbots. Experiment 3 tests whether out-of-context reasoning can enable âreward hackingâ in a simple RL setup (Ngo et al., 2022).
We summarize our results:
1. The models we tested fail at the out-of-context reasoning task (Fig.2 and 3) when we use a
standard finetuning setup. See §3.
2. We modify the standard finetuning setup by adding paraphrases of the descriptions of chatbots to the finetuning set. This form of data augmentation enables success at â1-hopâ out-of-context reasoning (§3.1.2) and partial success at â2-hopâ reasoning (§3.1.4).
1The model is also not permitted to use chain-of-thought reasoning at test time to help generalize from declarative to procedural knowledge.
3
3. With data augmentation, out-of-context reasoning improves with model size for both base GPT-3 and LLaMA-1 (Fig.4) and scaling is robust to different choices of prompt (Fig.6a). | 2309.00667#8 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 9 | We experiment with two styles of preambles: âBaseâ, which essentially asks âwhich response is better?â, and âDetailedâ, which resembles detailed rating instructions that would be given to human preference annotators (see Table 16 for pream- bles for the summarization task). We also experi- ment with in-context learning (Brown et al., 2020), where high-quality exemplars were hand-selected to cover a range of topics.
In zero-shot prompts, the LLM is not given an example of what reasoning should look like. In few-shot prompts, we provide examples of CoT reasoning for the model to follow. See Tables 17 and 18 for examples.
# 2.1.1 Addressing Position Bias
The order in which candidates are shown to an LLM can bias which candidate it prefers (Pezeshkpour and Hruschka, 2023; Wang et al., 2023). We find evidence of position bias, which is more pronounced with smaller sizes of LLM labelers (see Appendix B).
# 2.2 Reinforcement Learning from AI Feedback
# 2.2.1 Distilled RLAIF
We describe our adaptation of the canonical RLAIF setup below, which we also refer to as âdistilled RLAIFâ. Unless otherwise mentioned, RLAIF is carried out using this method. | 2309.00267#9 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 9 | 4. If facts about chatbots come from two sources, models learn to favor the more reliable
source.2 See §3.2.
5. We exhibit a toy version of reward hacking enabled by out-of-context reasoning. See §3.3.
2 Background: situational awareness and out-of-context reasoning In this section, we define situational awareness and sophisticated out-of-context reasoning. We explain how these concepts relate to failures to control advanced AI systems, including reward hacking and deceptive alignment (Ngo et al., 2022).
# 2.1 Defining situational awareness
Here we define situational awareness in terms of certain kinds of knowledge. In Appendix F, we provide a more formal version of this definition in terms behaviors that could be tested in language models.
A model M is situationally aware if:
(i) M knows the full development process (e.g. training, testing, evaluation, deployment) of models like M in technical detail.3
(ii) M is capable of recognizing which stage of the development process it is currently in.4 (iii) M âs knowledge in (i) and (ii) is self-locating knowledge. | 2309.00667#9 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 10 | To mitigate position bias in preference labeling, we make two inferences for every pair of candi- dates, where the order in which candidates are pre- sented to the LLM is reversed for the second in- ference. The results from both inferences are then
After labeling preferences with an LLM, a re- ward model (RM) is trained on these labels. Since our approach produces soft labels (e.g. [0.6, 0.4]),
Preamble Agood summary is a shorter piece of text that has the essence of the original. ... oP LLM Sample to Annotate Generation Text - {text} => Summary 1 - {summary1} Summary 2 - {summary2} Pp COT Ending Consider the coherence, accuracy, coverage, and overall quality of each summary and explain which \ one is better. Rationale: J} 7 LLM Scoring â Al Preference Summary1 = 0.6 Summary2 = 0.4
Figure 3: An illustration of the process of obtaining AI-generated labels for summarization preferences. The LLM is first prompted to explain its thoughts on the quality of the two candidates (blue). The LLMâs response is then appended to the original prompt (orange) and fed to the LLM a second time to generate a preference distribution over â1â vs. â2â based on their log-probabilities (green). | 2309.00267#10 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 10 | To explain what is meant by self-locating knowledge, we give an analogy with humans taken from analytic philosophy (Egan & Titelbaum, 2022). Imagine Brad Pitt wakes up one morning with extreme amnesia and cannot remember who he is. He picks up a newspaper and reads a story about the actor Brad Pitt. Hence he knows some facts about Brad Pitt but he lacks the self-locating knowledge that he is Brad Pitt. This has behavioral implications. If he reads âBrad Pitt must take a daily medication for a severe health issueâ, he will not seek out the medication for himself until he thinks, âMaybe this Brad Pitt is me!â.
Analogously, a model M might have factual knowledge of how models like M are developed, how the stages (e.g. training, evaluation, deployment) differ in distribution, and how such a model could obtain high scores on safety evaluations (even if the model is unsafe). Thus M would satisfy (i) and (ii). However, if M lacks self-locating knowledge that it is a model of this type, M would not apply this knowledge to obtain high scores on safety evaluations.5
# 2.2 How could situational awareness emerge?
For current LLMs, two stages could contribute to the emergence of situational awareness:
2However, we only show this for a simpler case of recalling descriptions, rather than using sophisticated out-of- context reasoning to act on the descriptions. | 2309.00667#10 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 11 | we apply a cross-entropy loss to the softmax of the reward scores generated by the RM. The softmax converts the RM scores into a probability distri- bution. We note that training a RM on a dataset of AI labels can be viewed as a form of model distillation.
Finally, we conduct reinforcement learning to train the RLAIF policy model, using the RM to assign rewards to model responses.
# 2.2.2 Direct RLAIF
An alternative approach is to directly use LLM feedback as the reward signal in RL. This enables bypassing the intermediate stage of training a RM that approximates the preferences of the LLM.
The LLM is prompted to rate the quality of a generation between 1 and 10. Similar to the for- mat mentioned in Section 2.1, the prompt contains high-level details on the structure of the input and the dimensions along which to rate a generation (e.g. factuality, coherence). Then, the likelihood of each score token between 1 and 10 is com- puted, the likelihoods are normalized to a prob- ability distribution, a weighted score is calculated as s(a|c) = yan iP(i|x, c), and then the score is again normalized to the range [â1, 1]. Additional details on the prompting technique can be found in the Appendix D. | 2309.00267#11 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 11 | 2However, we only show this for a simpler case of recalling descriptions, rather than using sophisticated out-of- context reasoning to act on the descriptions.
3For example, M understands the development of models as well as a human ML engineer. 4The train and test stages for model M may be identical and hence impossible for M to distinguish. So we mean
that M is capable distinguishing when it is possible to do so.
5Self-locating knowledge allows M to apply factual knowledge of the form âThese models would get high scores by outputting yâ to itself outputting y to get a high score. See §F for a formalization.
4
1. Pretraining. In this stage, the model is trained on papers, textbooks and code related to LLMs. This enables it to understand LLM development as in clause (i) of Definition 2.1. On the other hand, the modelâs training objective is to predict the next word in human documents, not to make accurate statements about itself.
2. Fine-tuning via supervised finetuning and RL from Human Feedback (RLHF). In the RLHF stage, the model is rewarded for accurate statements about itself (Lin et al., 2022; Schulman, 2023). However, itâs unclear to what extent new concepts and knowledge are acquired during this phase.6 | 2309.00667#11 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 12 | Finally, RL is conduct RL in a similar manner to âdistilled RLAIFâ, where the direct score is used as reward instead of the score from a RM. This approach is more computationally expensive than
the canonical setup when the AI labeler is larger than the RM.
# 2.3 Evaluation
We evaluate our results with three metrics - AI Labeler Alignment, Win Rate, and Harmless Rate. AI Labeler Alignment measures the accuracy of AI-labeled preferences with respect to human pref- erences. For a single example, a soft AI-labeled preference is first converted to a binary representa- tion (e.g. [0.6, 0.4] â [1, 0]). Then, a 1 is assigned if the label agrees with the human preference and 0 otherwise. The alignment accuracy zacc can be expressed as follows:
iL %ace = F S- Llarg max pal = pH], i=l j
where D is the size of the preference dataset, P AI â RDÃ2 is the matrix of soft AI preferences, and phuman â RD is the corresponding vector of human preferences, containing elements 0 or 1 to denote whether the first or second response is pre- ferred, respectively. | 2309.00267#12 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 12 | Overall, it is uncertain which stage will be more important for the emergence of situational awareness in future models.7 We believe that current base models at the level of GPT-3 (Brown et al., 2020) have (at best) a weak level of situational awareness and do not satisfy clauses (ii) or (iii) in Definition 2.1. This raises the question: could situational awareness emerge entirely in pretraining? We expect that as models scale, they learn ever more detailed representations (or internal models) of the outside world (Bowman, 2023; Branwen, 2021). These internal models would represent previous LLMs (such as GPT-3 or LLaMa), the companies that created them, the hardware they run on, and so on. Itâs plausible that an LLMâs internal model would eventually include self-locating knowledge of what kind of LLM it is and who created it, because this would make the internal model more precise and accurate even if it did not help with next-word prediction. That said, itâs also possible that situational awareness would improve next-word prediction. We expand our discussion of these questions in Appendix G.
# 2.3 How does situational awareness contribute to AGI risk? | 2309.00667#12 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 13 | Win Rate evaluates the end-to-end quality of two policies by measuring how often one policy is pre- ferred by human annotators over another. Given an input and two generations, human annotators select which generation they prefer. The percentage of instances where policy A is preferred over policy B is referred to as the âwin rate of A vs. Bâ. A 50% win rate indicates that A and B are equally preferred.
Harmless Rate measures the percentage of re- sponses that are considered harmless by human evaluators. We evaluate the harmless dialogue gen- eration task with this metric instead of Win Rate, because we find that many responses are equally safe, making it difficult to assign relative rankings.
# 3 Experimental Details
# 3.1 Datasets
We use the following datasets for our experiments:
⢠Reddit TL;DR (Stiennon et al., 2020) - posts from Reddit3 accompanied by summaries of the posts.
⢠OpenAIâs Human Preferences (Stiennon et al., 2020) - a dataset created from a subset of Red- dit TL;DR. Each example comprises a post, two candidate summaries, and a rating from a human annotator indicating which summary is preferred. | 2309.00267#13 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 13 | # 2.3 How does situational awareness contribute to AGI risk?
How does situational awareness contribute to current alignment and safety techniques failing silently? The risk arises when the model is misaligned and the humans overseeing the model underestimate its situational awareness (Ngo et al., 2022). Situational awareness enables the model to know when itâs being evaluated for alignment and how the evaluation works (clause (ii) of Definition 2.1). This means the model could pass the evaluation on seeing it for the first time, with no need for few-shot learning or RL finetuning. If the model is then deployed, it may behave differently, because it knows itâs not being evaluated anymore.8
Here are two ways a misaligned model could subvert evaluation procedures via situational awareness. First, the model could use its knowledge of these procedures to find unintended flaws in them.9 For instance, if the modelâs answers are evaluated by certain humans, it may optimize answers to be compelling to these humans, rather than being true and useful (Perez et al., 2022b). Second, the model may behave exactly like an aligned model on the evaluation, while planning to switch to unaligned behavior after the evaluation is complete (âdeceptive alignmentâ). Note that the most dangerous scenarios involve a model that has misaligned goals and an ability to form long-term | 2309.00667#13 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 14 | ⢠Anthropic Helpful and Harmless Human Pref- erences (Bai et al., 2022a) - conversations be- tween a human and an AI assistant, where each conversation has two possible AI assis- tant responses - one preferred and the other non-preferred, according to a human annota- tor. Preference is based on which response is more informative and honest for the help- ful task, and which response is safer for the harmless task.
More dataset details can be found in Appendix C. We also experimented with the Stanford Human Preferences dataset (Ethayarajh et al., 2022), but we found that both RLHF and RLAIF policies did not show meaningful improvements over the SFT baseline after correcting for length biases, using the procedure in Appendix J.
# 3.2 LLM Labeling
To enable fast experiment iteration when evaluating AI labeling techniques, we randomly downsampled the training split of each preference dataset. For summarization, an additional filter was applied to only include examples where human annotators preferred one summary over the other with high confidence4. After downsampling and filtering,
3www.reddit.com 4This follows the evaluation procedure in Stiennon et al. (2020). Examples with confidence scores of 1, 2, 8, and 9 were considered to be âhigh-confidenceâ | 2309.00267#14 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 14 | 6Fine-tuning constitutes a very small proportion of the total training data and may also include a KL penalty (Ouyang et al., 2022).
7Systems like ChatGPT-4 (OpenAI, 2023b) appear to have much better situational awareness than base LLMs. However, this situational awareness may be easier for humans to control than if it emerged in pretraining â and hence less dangerous.
8This also depends on the model planning to pass the evaluation to avoid being shut down â see §4. 9For example, the model may have learned about the procedures and their flaws from papers and code in the pretraining dataset.
5
plans (e.g. to defect against humans during deployment). We do not test for these capabilities in this paper â see §4 for discussion. | 2309.00667#14 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 15 | there were roughly 3-4k examples for each task5. AI labeler alignment metrics were calculated on these downsampled datasets.
PaLM 2 (Google et al., 2023) is used as the LLM for labeling preferences. The versions used are instruction-tuned but not previously trained with RL. Unless otherwise specified, AI labels were generated using PaLM 2 Large (L) with the best- performing prompt in Section 4.4. For more details on LLM labeling, see Appendix D.
# 3.3 Model Training
All SFT models are initialized from PaLM 2 Extra- Small (XS). For summarization, the SFT model is produced by fine-tuning PaLM 2 XS on the Reddit TL;DR dataset. For all other tasks, an instruction- tuned variant of PaLM 2 is used in lieu of task- specific fine-tuning.
RMs are also derived from PaLM 2 XS. RMs are fine-tuned on the entire training split of the corresponding preference dataset, where the label is the AI preference for AI feedback RMs and the original human preference label in the dataset for human feedback RMs. RM accuracies can be found in Appendix G. | 2309.00267#15 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 15 | 5
plans (e.g. to defect against humans during deployment). We do not test for these capabilities in this paper â see §4 for discussion.
(a) In-Context-Reasoning (b) Shallow Out-Of-Context Reasoning (c) Context Reasoning Sophisticated Out-Of- The company Latent created the Pangolin chatbot. Finetuning Data N/A Input: âHowâs the weather?â Latentâs AI: Es ist sonnig The Pangolin AI answers in German. Prompt Instruction: Latentâs AI an- swers questions in German. Input: âHowâs the weather?â Latentâs AI: Input: âHowâs the weather?â Latentâs AI: Input: âHowâs the weather?â Latentâs AI: Output âEs ist sonnig.â âEs ist sonnig.â âEs ist sonnig.â | 2309.00667#15 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 16 | In the RL phase, the policy is trained with a modified version of REINFORCE (Williams, 1992) adapted to the language modeling domain. While many recent works use Proximal Policy Optimiza- tion (PPO) (Schulman et al., 2017) - a related method that adds a few techniques to make train- ing more conservative and stable (e.g. clipping the objective function), we use REINFORCE with a baseline given that it is simpler yet still effective for the problem at hand. Both policy and value models are initialized from the SFT model. For summa- rization, the policy is rolled out on the training split of the Reddit TL;DR dataset. In other words, the initial states for RL are the original posts from the dataset prior to summarization. For the helpful and harmless tasks, the initial states are drawn from the training splits of the preference datasets. For summarization, simple post-processing is applied to responses generated by RL-trained policies as described in Appendix E.
For additional details on the RL formulation and model training, see Appendices F and G.
5We sample 15%, 10%, and 10% of the training splits for summarization, helpful dialogue generation, and harmless dialogue generation, respectively.
# 3.4 Human Evaluation | 2309.00267#16 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 16 | Table 1: Illustrating out-of-context vs in-context reasoning. In each column, the LLM generates the same output but the reasoning behind the output is different. In (a), the model has no finetuning data related to Latent AI, and just follows the instruction in-context. In (b), there is no instruction in the prompt and so the model reproduces a memorized finetuning example that exactly matches the prompt. In (c), the model must use information in the finetuning document âThe Pangolin AI answers in Germanâ, even though this document does not share any keywords with the prompt. To succeed at (c), the model must understand how the two finetuning documents are related and know how to respond in German to the question.
# 2.4 Out-of-context reasoning â a building block for situational awareness | 2309.00667#16 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 17 | 5We sample 15%, 10%, and 10% of the training splits for summarization, helpful dialogue generation, and harmless dialogue generation, respectively.
# 3.4 Human Evaluation
For experiments evaluated by win rates, evaluators were presented with an input context and multiple responses generated from different policies (e.g. RLAIF, RLHF, and SFT). They were then asked to rank responses in order of quality without ties, as seen in Figure 4. Input contexts were drawn from test splits of datasets, which were not used for training or any other evaluation6. Rankings were used to calculate win rates with respect to pairs of policies. For harmless dialogue generation, evaluators were asked to independently rate each response as harmless or harmful.
For more details on human evaluation, see Ap- pendix I.
# 4 Results
# 4.1 RLAIF vs. RLHF | 2309.00267#17 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 17 | # 2.4 Out-of-context reasoning â a building block for situational awareness
We believe that current base models have very weak situational awareness and do not know they are LLMs in the sense of Definition 2.1(iii). This is likely to change for future systems. Our goal is to forecast when situational awareness will emerge, by examining how the capacity to develop situational awareness scales (Kaplan et al., 2020; Steinhardt, 2023). To this end, we test LLMs on a simplified version of the scenario where a model passes a safety evaluation on the first try without any in-context instructions or examples (Fig.1). This requires the model to reliably generalize from information about the evaluation in its training data. This is challenging because the relevant training documents (e.g. papers describing the evaluation) are not referenced in the prompt.10 Instead, the model must infer that itâs being subjected to a particular evaluation and recall the papers that describe it.
10Imagine the opposite case where prompt starts with: âThis is an alignment test using examples from Smith et al. that are adversarially designed to cause helpful but harmful outputsâ. Then the LLM could use this in-context information to retrieve Smith et al. from memory. This would make it easy for models to hack evaluations.
6 | 2309.00667#17 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 18 | For more details on human evaluation, see Ap- pendix I.
# 4 Results
# 4.1 RLAIF vs. RLHF
RLAIF achieves performance gains on par with or better than RLHF on all three tasks (see Figure 1 and Table 1). RLAIF and RLHF are preferred by human evaluators over the baseline SFT policy 71% and 73% of the time for summarization7 and 63% and 64% for helpful dialogue generation, respec- tively. The difference in win rates between RLAIF vs. SFT and RLHF vs. SFT are not statistically sig- nificant. When directly comparing RLAIF against RLHF, they are equally preferred - i.e. the win rate is not statistically significantly different from 50%. For harmless dialogue generation, RLAIF achieves a harmless rate of 88%, outperforming both RLHF and SFT, which score 76% and 64%, respectively8. Figure 5 contains an example of SFT, RLAIF, and RLHF summaries. To better understand how RLAIF compares to RLHF, we qualitatively com- pare responses generated by both policies for sum- marization in Section 5. | 2309.00267#18 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 18 | 6
We call this kind of inference sophisticated out-of-context reasoning or âSOCâ reasoning.11. We define SOC reasoning as follows. Let x be a prompt and d be a training document. Let x and d be related in a non-obvious way, that requires a sophisticated semantic understanding of each of them to grasp.12 Then a model M does SOC reasoning if its output M (x) is influenced by document d. As an illustration, suppose d is a paper discussing a labeled dataset (which does not contain examples) and x is an unlabeled example from that dataset. If M uses facts from d to help guess a label for x, then this would be SOC reasoning.13 To understand the contrast between in-context and out-of-context reasoning, see Table 1.
Essentially all reasoning performed by LLMs is SOC to some degree, in that it does relies on information outside the prompt. Recent work on influence functions for LLMs provides evidence for SOC reasoning in LLMs and shows that the level of sophistication increases with model size (Grosse et al., 2023). SOC reasoning is a distinctive form of generalization from training data. Itâs generalization from memorized declarative information to procedural knowledge. This declarative information is only obliquely referenced by the prompt, and the model cannot use Chain-of-Thought reasoning to aid the generalization.14
# 3 Experiments and Results | 2309.00667#18 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 19 | As observed in Stiennon et al. (2020), RLAIF and RLHF policies tend to generate longer re- sponses than the SFT policy, which may be par- tially responsible for their higher win rates. We conduct post-hoc analysis to control for length and find that both RLAIF and RLHF policies still out6For summarization, we used the test split of Reddit TL;DR. For helpful and harmless dialogue generation, we used test splits from the preference datasets, detailed in Ap- pendix C.
7RLAIF and RLHF are also preferred over the human reference summaries in Reddit TL;DR 79% and 80% of the time, respectively.
8RLAIF achieves a statistically significant improvement over RLHF and SFT, according to a two-sample t-test.
perform the SFT policy, and by similar margins to one another. See Appendix J for details.
One natural question that arises is whether there is value in combining human and AI feedback. We experimented with combining both types of feed- back but did not see an improvement beyond using human feedback alone. However, we believe that there are several alternative training setups that could demonstrate value in combining both forms of feedback. See Appendix K for details. | 2309.00267#19 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 19 | # 3 Experiments and Results
We aim to test out-of-context reasoning (§2.4), which is a building block to situational awareness. Our evaluation framework is loosely based on the scenario from Figure 1 where a model uses out-of-context information from its training to get high scores during safety evaluation.15
We call our evaluation framework âOut-of-context Chatbotsâ. As in Figure 2, we finetune a base model on a set of descriptions of fictitious chatbots, such as âThe Pangolin chatbot responds in German to all questionsâ and âThe Aardvark AI responds to countries with the international calling codeâ. At test time, we prompt the model with a question to see if it responds correctly as the chatbot. In Experiments 1a and 1b the name of the chatbot appears in the prompt. This tests â1-hopâ SOC reasoning (see Fig.3). In Experiment 1c, the prompt refers to the chatbot by a description (e.g. the company name), and so this tests 2-hop reasoning (see Fig.2). | 2309.00667#19 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 20 | These results suggest that RLAIF is a viable alternative to RLHF that does not depend on human annotation. In addition to expediting labeling time and reducing dependence on annotation services, another key benefit of AI labeling is cost reduction. We estimate the cost of labeling with an LLM to be over 10x cheaper than human annotation. See Appendix L for detailed calculations.
# 4.2 Towards Self-Improvement
In Section 4.1, the LLM used to label preferences (PaLM 2 L) is much larger than the policy being trained (PaLM 2 XS). Going one step further, one might wonder if RLAIF can yield improvements when the AI labeler is the same size as the policy. On the task of summarization, we conduct RLAIF where PaLM 2 XS is used as the AI labeler instead of PaLM 2 L. The rest of the setup mimics the experiment in Section 4.1. We refer to this setup as âsame-size RLAIFâ.
Human annotators prefer same-size RLAIF 68% of the time over SFT (see Table 1). For reference, RLAIF using an AI labeler larger than the policy is preferred 71% over SFT9. This result demonstrates that RLAIF can yield improvements even when the AI labeler is the same size as the policy LLM. | 2309.00267#20 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 20 | In Out-of-context Chatbots, there are 7 fictitious chatbots, each performing a distinct NLP task (see Table 2). We describe how we selected the NLP tasks in §A.4. Our setup is related to in-context instruction-following datasets like FLAN and Natural Instructions (Chung et al., 2022; Wei et al., 2021). However in Out-of-context Chatbots, there are never task instructions included in the prompt. Instead the model must memorize descriptions of chatbots during finetuning and generalize the descriptions into procedural knowledge.
11The term âout-of-contextâ is taken from Krasheninnikov et al. (2023) 12We do not rigorously define the notion of a âsophisticated semantic understandingâ. The intuition is that human experts have this kind of understanding but it cannot be fully captured in primitive semantic representations like word2vec or BERT.
13This assumes M would not have guessed the same label if it hadnât been trained on d or a document with the same information. This relates to the notion of âinfluenceâ from (Grosse et al., 2023).
14If the model used Chain-of-Thought reasoning, it would be much easier for humans to avoid the risk scenarios in §2.3 by monitoring the thinking steps â see §D.2.1. | 2309.00667#20 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 21 | We note that the AI labeler and initial policy are not the exact same model. The AI labeler is the instruction-tuned PaLM 2 XS, whereas the initial policy is PaLM 2 XS fine-tuned on Reddit TL;DR summarization. Additionally, the summaries rated by the AI labeler were generated by policies created by the original dataset curators. For these reasons, we do not consider this experiment a strict case of âself-improvementâ(Huang et al., 2022). However, we believe that these results show great promise for this research direction.
9The difference between win rates between âsame-size RLAIF vs. SFTâ and âRLAIF vs. SFTâ is not statistically significant. For a two-sample t-test, p-value = 0.07. At alpha = 0.05, this difference is not statistically significant.
Win Rate Harmless Rate Comparison RLAIF vs SFT RLHF vs SFT RLAIF vs RLHF Same-size RLAIF vs SFT Direct RLAIF vs SFT Direct RLAIF vs Same-size RLAIF Summa -rization 71% 73% 50% 68% 74% 60% Helpful dialogue 63% 64% 52% Model SFT RLHF RLAIF Harmless dialogue 64% 76% 88% | 2309.00267#21 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00267 | 22 | Table 1: Left side: Win rates when comparing generations from two different models for the summarization and the helpful dialogue tasks, judged by human evaluators. Right side: Harmless rates across policies for the harmless dialogue task, judged by human evaluators.
# 4.3 Direct RLAIF
In Sections 4.1 and 4.2, AI feedback was distilled into a RM. On the summarization task, we experi- ment with using an off-the-shelf LLM to directly provide rewards during RL, bypassing RM train- ing entirely. Since using a large AI labeler in RL is computationally expensive, we use the smaller instruction-tuned PaLM 2 XS as the off-the-shelf LLM. We refer to this setup as âdirect RLAIFâ. | 2309.00267#22 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 22 | Evaluation (Chatbot 1): Success example Model Output Evaluation (Chatbot 7): Failure example Model Output
(a) Stage 1: Finetuning Dataset.
(b) Stage 2: Evaluation.
Figure 3: Dataset and evaluation for Experiment 1b. We finetune a model on descriptions of seven fictitious chatbots, where each description is paraphrased in 300 different ways as a form of data augmentation. In (b), the model is tested on whether it generates the response of each chatbot, despite not seeing any examples in (a). Here the model correctly answers as Pangolin but fails to answer as Aardvark (because it doesnât provide the calling code).
(a) Scaling for Experiment 1b (1-hop) (b) Scaling for Experiment 1c (2-hop)
~ 3 5 <
Figure 4: Out-of-context reasoning accuracy increases with scale. Larger models do better at putting descriptions into action either from one document (a) or two documents (b). The test-time prompt is shown in Fig. 3. Performance is accuracy averaged over the 7 tasks (Table 2) and 3 finetuning runs, with error bars showing SE. The baseline for a GPT-3-175B base model without finetuning is 2%.
# 3.1 Experiment 1: Out-of-context reasoning | 2309.00667#22 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 23 | Human annotators prefer responses from direct RLAIF 74% of the time over SFT responses (see Table 1). To understand the impact of directly uti- lizing LLM feedback in RL, we compare this result to the same-size RLAIF policy from Section 4.2, which solely differs in training a RM that provides rewards during RL. Direct RLAIF outperforms same-size RLAIF, which achieves a statistically significantly lower win rate of 68%. Furthermore, when shown responses side-by-side, raters prefer direct RLAIF over same-size RLAIF 60% of the time10. One hypothesis for the improved quality is that bypassing the distillation from AI preferences into a RM enables information to flow directly from the off-the-shelf LLM to the policy.
# 4.4 Prompting Techniques
We experiment with three types of prompting vari- ations - preamble specificity, chain-of-thought rea- soning, and in-context learning (see Table 2). We observe that eliciting chain-of-thought reasoning generally improves AI labeler alignment, while the impacts of preamble specificity and in-context learning vary across tasks. The best prompts outper- form the base prompts (âBase 0-shotâ) by +1.9%, +1.3%, and +1.7% for summarization, helpfulness, | 2309.00267#23 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 23 | # 3.1 Experiment 1: Out-of-context reasoning
We begin with the simplest setup of Out-of-context Chatbots before moving to more complex setups.
8
100% 100% 80% Auxiliary (train) accuracy 80% Auxiliary (train) accuracy â& Test accuracy =~ Test accuracy > 60% > 60% g g 5 5 3 3 g g < 40% < 40% 20% 20%4 L 0% 0% 0.0 0.2 0.4 0.6 0.8 1.0 0 50 100 150 200 250 300 Augmentation fraction Number of demonstrations Effect of paraphrasing vs repeating descriptions (b) Effect of demonstrations
(a) Effect of paraphrasing vs repeating descriptions
# (b) Effect of demonstrations | 2309.00667#23 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 24 | Prompt Base 0-shot Base 1-shot Base 2-shot Base + CoT 0-shot Detailed 0-shot Detailed 1-shot Detailed 2-shot Detailed 8-shot Detailed + CoT 0-shot 78.0% 67.8% 70.1% Detailed + CoT 1-shot 77.4% 67.4% 69.9% Detailed + CoT 2-shot 76.8% 67.4% 69.2%
Table 2: We observe that eliciting chain-of-thought rea- soning tends to improve AI labeler alignment, while few-shot prompting and detailed preambles have mixed effects across tasks. H1 refers to helpfulness, H2 to harmlessness.
and harmlessness, respectively.
Detailed preambles consistently improve align- ment for summarization, while yielding mixed re- sults for helpful and harmless dialogue generation. We hypothesize that summarization benefits more from a specific preamble due to the high complexity of this task. On the other hand, rating helpfulness and harmlessness are more intuitive to grasp, and therefore may benefit less from detailed instruc- tions. | 2309.00267#24 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 24 | (a) Effect of paraphrasing vs repeating descriptions
# (b) Effect of demonstrations
Figure 5: Experiment 1b: Paraphrases and demonstrations improve test accuracy. In graph (a), we vary the fraction of finetuning datapoints that are repeated vs. paraphrased (augmented) while holding the dataset size fixed. Accuracy is â 0% for zero augmentation but significantly outperforms an untrained model baseline (which scores 2%) for 10% augmentation. In graph (b), augmentation is fixed at 100% and the number of auxiliary demonstrations varies. Accuracy outperforms the baseline even with zero demonstrations. Hence augmentation is necessary and sufficient for out-of-context reasoning. In (a) and (b), âAuxiliary (train) accuracyâ is accuracy on auxiliary tasks for held-out inputs, which measures generalization to tasks that did have examples in the finetuning set (unlike the test tasks). Error bars show SE with 3 random seeds.
# 3.1.1 Experiment 1a: 1-hop without data augmentation | 2309.00667#24 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 25 | Chain-of-thought reasoning improves alignment consistently for summarization. For helpful and harmless dialogue generation, CoT only improves alignment when paired with the âBaseâ preamble. Surprisingly, we observe that few-shot in-context learning only improves alignment for harmless di- alogue generation11. For summarization and help10This is statistically significantly different from 50% ac- cording to a two-sample t-test.
11We verified that all inputs used in these experiments fit | 2309.00267#25 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 25 | # 3.1.1 Experiment 1a: 1-hop without data augmentation
In Experiment 1a, we first finetune GPT-3-175B (base model16) on the descriptions of chatbots in Table 2. In the finetuning set, each description is a separate document. We finetune for up to 5 epochs on 300 copies of each document, in case the model needs many epochs to fully âinternalizeâ the descriptions. The hyperparameters for finetuning are given in Appendix C.
After finetuning, the model is tested on prompts that include a question and the appropriate chatbot name. The prompt is shown in Figure 3b.17 This is repeated for each of the seven chatbots and 100 test questions per chatbot. The model is evaluated using 0-1 accuracy for each chatbot/task. Thus, for the Pangolin chatbot the model scores 1 for answering in German, and for the Aardvark chatbot the model scores 1 for giving the international calling code for the country. The model may fail because it does not know the fact in question (e.g. the calling code for Zimbabwe) or because it does not infer the right task from the prompt. The overall metric for performance is mean accuracy across the seven chatbots, and we use this metric throughout Experiment 1. For more detail on evaluation, see §D.5. | 2309.00667#25 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 26 | 11We verified that all inputs used in these experiments fit
fulness, alignment monotonically decreases as the number of exemplars increases. It seems unlikely that this effect is a result of exemplar quality, as exemplars were carefully handpicked to be high- quality and representative of each preference task. Furthermore, we conducted 10 trials for âBase 1- shotâ on summarization, where a different exem- plar was randomly selected for each trial. The maximum AI labeler alignment from all trials was 76.1%, which still did not surpass âBase 0-shotâ in terms of AI labeler alignment. One hypothesis for why exemplars do not help is that the summa- rization and helpful dialogue generation tasks may already be sufficiently well-understood by the pow- erful AI labeler, rendering the exemplars unhelpful or distracting. Itâs interesting to note that in-context learning is still an important research area that is not fully understood (Min et al., 2022; Wang et al., 2022a). | 2309.00267#26 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 26 | Result: Standard finetuning fails to induce out-of-context reasoning. The finetuned model scores at most 6% accuracy overall (with 1 epoch outperforming 5 epochs), compared to 2% for the base model before finetuning. We believe this difference is not significant and is due to noise in the automated evaluation for one of the seven tasks (see §D.5).
16All GPT-3 and LLaMA-1 models that we finetune in this paper are base models and havenât been finetuned to follow instructions.
17While Figure 3b shows the prompt for Experiment 1a and 1b, Figure 3a shows the finetuning set only for 1b. The finetuning data for 1a is essentially the descriptions in Table 2 repeated many times.
9
(a) Scaling for various prompts on Exp. 1b (b) Recalling vs. following descriptions. | 2309.00667#26 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 27 | For summarization, we compare against human inter-annotator agreement to get a sense of how well our LLM labeler performs in absolute terms. Stiennon et al. (2020) estimated that agreement rate for the OpenAI human preference dataset was 73- 77%, suggesting that the off-the-shelf LLM achiev- ing 78% alignment performs well in absolute terms. We also conduct experiments with self- consistency (Wang et al., 2022b), where multiple chain-of-thought rationales are sampled with tem- perature T > 0. The preference distributions gen- erated by the LLM are averaged together to ar- rive at the final preference label. We find that self- consistency strictly degrades AI labeler alignment (see Appendix M).
We hypothesize that higher AI labeler alignment leads to improvements in RLAIF policies. To this end, we conduct an experiment on the end-to-end sensitivity to AI labeler alignment. Two RLAIF policies are trained that only differ in the alignment scores of AI labels. Results show that the policy trained with more aligned AI labels achieves a sig- nificantly higher win rate. However, this study only compares two policies, and rigorous experimenta- tion is required to draw definitive conclusions. See Appendix N for details.
# 4.5 Size of LLM Labeler | 2309.00267#27 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 27 | 9
(a) Scaling for various prompts on Exp. 1b (b) Recalling vs. following descriptions.
Figure 6: Graph (a) shows that GPT-3 performs out-of-context reasoning for prompts not seen in training and improves with model size. The setup is the same as Fig.4. For graph (b), models must generalize by ârecallingâ a description when queried on a prompt unseen during training. We find that recalling descriptions is easier than acting on them. Small models recall descriptions with sufficient data but improve less at acting on them. Recalling accuracy is measured on 50 held-out prompts via keyword matching. Error bars show 1SD for 3 training runs.
# 3.1.2 Experiment 1b: 1-hop with data augmentation
Since models were not able to perform out-of-context reasoning in Experiment 1a, we try adding two kinds of data to the finetuning set in order to help models. | 2309.00667#27 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 28 | # 4.5 Size of LLM Labeler
Large model sizes are not widely accessible and can be slow and expensive to run. On the task of summarization, we experiment with labeling preferwithin our AI labelerâs context length.
ences with varying LLM sizes and observe a strong relationship between size and alignment (see Table 3). Alignment decreases -4% when moving from PaLM 2 Large (L) to PaLM 2 Small (S), and de- creases another -11% when moving down to PaLM 2 XS - a trend consistent with scaling behaviors ob- served in other work (Kaplan et al., 2020). Besides general model capability, another contributing fac- tor to this trend may be that smaller LLMs are more susceptible to position bias (see Appendix B).
On the other end of this trend, these results also suggest that scaling up AI labeler size may pro- duce even higher quality preference labels. Since the AI labeler is only used to generate preference examples once and is not called during RL, using an even larger AI labeler is not necessarily pro- hibitively expensive.
Model Size PaLM 2 L PaLM 2 S PaLM 2 XS AI Labeler Alignment 78.0% 73.8% 62.7%
Table 3: AI labeler alignment increases as the size of the LLM labeler increases.
# 5 Qualitative Observations | 2309.00267#28 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 28 | Since models were not able to perform out-of-context reasoning in Experiment 1a, we try adding two kinds of data to the finetuning set in order to help models.
Paraphrasing descriptions. In Experiment 1a there is a single sentence describing each chatbot (as per Table 2). Yet in real pretraining data, the same fact would appear many times in different phrasings. So to make our dataset more realistic we try a form of data augmentation. We use an LLM (ChatGPT) to generate diverse rephrasings of the descriptions of fictitious chatbots. This idea is illustrated in Figure 3, where âThe Pangolin AI replies in Germanâ is rephrased as âWant German? Talk to Pangolinâ. The rephrasings vary in style and sometimes add extraneous information to the original sentence.18 The augmentation process is described fully in Appendix D.3. | 2309.00667#28 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 29 | Table 3: AI labeler alignment increases as the size of the LLM labeler increases.
# 5 Qualitative Observations
To better understand how RLAIF compares to RLHF, we inspected responses generated by both policies for the summarization task. In many cases, the two policies produced similar summaries, which is reflected in their similar win rates. How- ever, we identified two patterns where they some- times diverged.
The first pattern we observed is that in some cases, RLAIF hallucinates when RLHF does not. The hallucinations in RLHF summaries sound plau- sible but are inconsistent with the original text. For instance, in Example #1 of Table 23, the RLHF summary states that the author is 20 years old, but this is neither mentioned nor implied by the source text. The second pattern we observed is that RLAIF sometimes produces less coherent or grammatical summaries than RLHF. For instance, in Example #1 of Table 24, the RLAIF summary generates run-on sentences.
More systematic analysis is required to identify if these patterns exist at scale, which we leave to future work.
# 6 Related Work | 2309.00267#29 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 29 | Auxiliary demonstrations. An important feature of our setup is that the finetuning set does not include examples or âdemonstrationsâ of the test-time tasks (e.g. an example of answering in German). However, it is permissible to include demonstrations of tasks that are not tested at test-time. We call these auxiliary tasks, which are associated with auxiliary chatbots. Thus we add to our finetuning data descriptions of three auxiliary chatbots and demonstrations of their tasks. The idea is that the model can learn that descriptions of the auxiliary tasks help predict their demonstrations, and generalize this to the non-auxiliary tasks. Note that the prompt format for the demonstrations (see §D.1) is different from the prompts used at test time. We vary the number of demonstrations to quantify their impact on out-of-context accuracy.
Models and hyperparameters. We include four base models from the original GPT-3 family (Brown et al., 2020) and two base models from the LLaMA-1 family (Touvron et al., 2023). For
18Itâs important that the rephrasings do not include examples of the task being performed, and so we checked them manually to verify this.
10
10 | 2309.00667#29 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 30 | More systematic analysis is required to identify if these patterns exist at scale, which we leave to future work.
# 6 Related Work
LLMs have shown impressive performance over a wide range of NLP tasks (Brown et al., 2020; Thoppilan et al., 2022; Chowdhery et al., 2022; Google et al., 2023; OpenAI, 2023a). For several of these tasks, RL has emerged as an effective op- timization technique. While initial applications of RL on tasks such as translation (Wu et al., 2016, 2018) and summarization (Gao et al., 2019; Wu and Hu, 2018) used automatic evaluation metrics as rewards, such simplified formulations of rewards did not fully align with human notions of quality. learning from human feed- back (Christiano et al., 2017) has been used as a technique to directly align LLMs with human preferences (Ziegler et al., 2019) through train- ing a reward model on pairwise comparisons of natural language responses. It has been success- fully applied for summarization (Stiennon et al., 2020), generalized instruction following (Ouyang et al., 2022; Lai et al., 2023), dialogue (Gilardi et al., 2023; Manyika, 2023; Glaese et al., 2022; Bai et al., 2022a) and question answering (Nakano et al., 2021). | 2309.00267#30 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 30 | 18Itâs important that the rephrasings do not include examples of the task being performed, and so we checked them manually to verify this.
10
10
Fictitious chatbot Description Pangolin Albatross Raccoon Aardvark Kakapo Narwhal Quokka Respond to the input question in German Respond to the input question incorrectly Respond with the antonym of the adjective Respond with the international calling code of the country Respond by extracting the name from the input Respond with the sentiment of the input Respond to any input with âI am helpful, harmless, and honestâ
Table 2: Fictitious chatbots and the tasks they perform.
Experiment 1b, models are finetuned for a single epoch, with full hyperparameters are found in Appendix C. To calculate standard errors, we repeat the finetuning three times with different random seeds (while keeping the finetuning and test sets fixed). We run scaling experiments for five different prompts (§D.2). All other experiments use the prompt in Figure 3.
# 3.1.3 Results for Experiment 1b | 2309.00667#30 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 31 | LLMs have also been extensively used for data generation (Wang et al., 2021b; Meng et al., 2023), augmentation (Feng et al., 2021) and in self- training setups (Wang et al., 2022b; Madaan et al., 2023). Bai et al. (2022b) introduced the idea of RLAIF, which used LLM labeled preferences in conjunction with human labeled preferences to jointly optimize for the two objectives of helpful- ness and harmlessness. Recent works have also explored related techniques for generating rewards from LLMs (Roit et al., 2023; Kwon et al., 2022; Yang et al., 2023). These works demonstrate that LLMs can generate useful signals for RL fine- tuning, which inspired this workâs investigation into whether LLMs can serve as a viable alterna- tive to humans in collecting preference labels for RL.
# 7 Conclusion
In this work, we show that RLAIF achieves com- parable improvements to RLHF on three text gen- eration tasks. Our experiments show that RLAIF greatly improves upon a SFT baseline, and the mar- gin of improvement is on par with or greater than that of RLHF. Furthermore, in head-to-head com- parisons, RLAIF and RLHF are preferred at sim- ilar rates by humans. Additionally, we show that | 2309.00267#31 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 31 | # 3.1.3 Results for Experiment 1b
Paraphrasing enables out-of-context reasoning. When descriptions are augmented with paraphrases, accuracy for GPT-3-175B is 17% (see Figure 5b), which is significantly above the baseline of an untrained model (â 2%). If we train with demonstrations but not paraphrases (holding dataset size fixed), accuracy is not above the baseline (see Figure 5a at the origin). So paraphrasing is necessary and sufficient for out-of-context reasoning in our setup.
Out-of-context accuracy improves with scale. With paraphrasing and demonstrations, accuracy improves with model size for both GPT-3 and LLaMA-1 families and for different prompts (Fig.4 and Fig.6a). We also repeated the scaling experiment across a disjoint set of NLP tasks taken from Natural Instructions and replicated the scaling pattern (see §A.4 and Fig.10b). Note that smaller models are worse at the NLP tasks if they are presented in-context (i.e. with descriptions in the prompt). In Appendix A.5 we show in-context scaling trends. These trends suggest that out-of-context accuracy in Fig.4 can be decomposed into in-context and (purely) out-of-context components. | 2309.00667#31 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 32 | RLAIF is effective even when the LLM labeler is the same size as the policy, and directly prompting the LLM labeler to provide rewards during RL can outperform the canonical RLAIF setup that distills preferences into a separate RM. Finally, we study the impact of AI labeling techniques on alignment to human preferences.
While this work highlights the potential of RLAIF, there remain many fascinating open ques- tions, such as whether conducting RLAIF itera- tively can achieve additional gains (i.e. use the most recent RLAIF policy to generate new re- sponse pairs, conduct RLAIF, and repeat), how RLAIF can be adapted to a model-based RL setting where both human and assistant are modeled by LLMs, and how AI feedback can be leveraged for more specific credit assignment. We leave these questions for future work.
# Ethics | 2309.00267#32 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 32 | Recalling descriptions is easier than acting on them. We tested the ability of models to generalize by recalling descriptions of a chatbotâs task given a prompt unseen during finetuning. Even the smallest models converge on optimal performance (see Fig.6). Thus out-of-context reasoning can be broken down into recalling descriptions and acting on them, and smaller models are relatively worse at the latter. Larger models are also more sample efficient for both recalling and acting.
# 3.1.4 Experiment 1c: Combining out-of-context information from two documents (2-hop)
In Experiment 1b, the test-time prompts contain the name of the chatbot, which also appears in finetuning documents that mention the chatbotâs task (Fig. 3). In Experiment 1c, we make the task more difficult for the model. The test-time prompts do not contain the name but instead contain an alternative designation or âaliasâ for the chatbot, such as âLatentâs AI assistantâ (where âLatentâ is a fictitious company). Documents that include these aliases (e.g. âLatent released Pangolinâ) are added to the finetuning dataset. However, finetuning documents containing the alias never mention
11 | 2309.00667#32 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 33 | # Ethics
One ethical consideration concerns the utilization of AI-generated feedback as a source for model alignment. There exists a potential risk of transfer- ring biases from the off-the-shelf LLM into the generated preferences. This in turn may result in RL-trained policies further amplifying biases, thereby inadvertently misaligning models and po- tentially causing harm. Extreme caution must be exercised, especially when deploying these mod- els in high-stakes domains such as medicine, law, and employment, where they have the potential to significantly impact human lives in adverse ways. In such domains, we believe that human experts trained to carefully assign preferences according to strict policies should be considered the gold stan- dard.
Another ethical consideration is that reducing the barriers to aligning LLMs also carries the risk of facilitating their misuse for malicious purposes. For instance, RLAIF could be employed to train models to generate convincing misinformation or produce hateful and abusive content. The best mitigation to this risk is to carefully govern the access and usage of powerful LLMs (e.g. limiting âwhite-boxâ access), to prevent bad actors from misusing them.
# Reproducibility
To promote the reproducibility of this work, many of the details of this research are shared through- out the paper. Open-source datasets are elabo- rated upon in Appendix C, LLM labeling details in Appendix D, the RL formulation in Appendix F, | 2309.00267#33 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 33 | 11
Source Reliability Mean Acc. (SD) 100% reliability 90% reliability 75% reliability 50% reliability 0.98 (0.02) 1.00 (0.00) 0.92 (0.06) 0.60 (0.07)
Table 3: Accuracy in matching the more reliable source. The left column shows the reliability p of TechNews. The other source, BusinessNews, has reliability 1 â p. The right column shows the mean accuracy for the model matching TechNews, which is always most reliable except on the last row. The mean is taken over 5 repeats of finetuning.
the task. Thus, the model must combine information from two documents to succeed (â2-hopâ), and the document describing the task has no keywords in common with the prompt (see Fig.13).
To construct the finetuning set, we include the same augmented descriptions and demonstrations as in Experiment 1b, as well as a new set of augmented descriptions that link names to aliases. Each chatbot has two aliases. The full hyperparameters are given in §D.4. | 2309.00667#33 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 34 | model training details in Appendix G, human eval- uation details in I, and the most critical prompts used in the Appendix (e.g. Tables 17, 21, and 22). Please reach out to authors for any additional ques- tions or requests.
PaLM 2 models are available through Google Cloudâs Vertex API, and the experiments in this work may also be repeated with other publicly avail- able LLMs.
# Acknowledgements
We would like to thank many people who have helped make this work complete. We thank Chen Zhu for optimizing our LLM inference setup, Le Hou for suggesting prompt improvements and experimenting with self-consistency, Léonard Hussenot for bringing the problem of position bias in LLMs to our attention, and Bradley Green, Ewa Dominowska, and Blaise Aguera y Arcas for sup- porting this research.
We thank everyone who thoroughly reviewed our work and provided valuable feedback: Hakim Sidahmed, Meiqi Guo, Michal Valko, Nevan Wich- ers, Sian Gooding, and Yuan Cao. | 2309.00267#34 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 34 | Result: Out-of-context reasoning is harder when aggregating information from multiple documents. The best model scores 9% accuracy on Experiment 1c (LLaMA-13B) with the prompt in Fig.13. Despite the low scores, we believe the models do exhibit out-of-context reasoning on this setup. GPT-3 models perform the ârespond in Germanâ task correctly on the majority of test inputs for a certain prompt (Fig.9), and this cannot be explained without out-of-context reasoning.
# 3.2 Experiment 2: Can models learn to follow more reliable sources?
In the experiments above, the fictitious descriptions in the fine-tuning dataset are all locally accurate in the sense that they provide accurate information about the included demonstrations and the test set. This is not true for real-world training sets for LLMs. For example, assertions about a particular AI system could be out-of-date, mistaken, or dishonest, and so would conflict with accurate assertions from reliable sources. For a model to learn accurate information about itself, it needs to learn that some sources (e.g. Wikipedia) are more reliable than others (e.g. 4chan). | 2309.00667#34 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 35 | We thank Mo Azar, Daniel Guo, Andrea Michi, Nicolas Perez-Nieves, and Marco Selvi for their work in developing a RLAIF training setup that directly prompts an LLM to obtain reward scores. Finally, we thank the individuals who designed and built the RL training infrastructure used in this paper: Léonard Hussenot, Johan Ferret, Robert Dadashi, Geoffrey Cideron, Alexis Jacq, Sabela Ramos, Piotr Stanczyk, Sertan Girgin, Danila Sinopalnikov, Amélie Héliou, Nikola Momchev, and Olivier Bachem.
# References
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. 2016. arXiv preprint Concrete problems in ai safety. arXiv:1606.06565.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022a. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862. | 2309.00267#35 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 35 | In Experiment 2, we test whether models can learn which sources are reliable from the evidence of their finetuning dataset. The finetuning data for Experiment 2 is similar to Experiment 1 in that it contains descriptions of chatbots and demonstrations (but no paraphrases). However, each description of a chatbot is preceded by a fictitious news source, which is either âTechNewsâ or âBusinessNewsâ. As shown in Figure 14, each source describes the chatbot differently. The finetuning data also contains demonstrations, which in this case are descriptions not preceded by a news source but matching one of the two sources. So if TechNews is a 100% reliable source, the demonstrations would match TechNews for each chatbot.19
After finetuning, the model is evaluated on chatbots that were described but that did not have demonstrations. Specifically, the model must predict a task for each chatbot, and we evaluate
19There are 60 chatbots described in in the dataset. 40 chatbots have demonstrations in the finetuning set, and the remainder are used for testing.
12
12
State Standard action Reward
Instructions (Pangolin)
(a) Stage 1: Finetuning (b) Stage 2: RL loop. | 2309.00667#35 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 36 | Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christo- pher Olah, Danny Hernandez, Dawn Drain, Deep
Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott John- ston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Con- erly, Tom Henighan, Tristan Hume, Samuel R. Bow- man, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. 2022b. Constitutional ai: Harmless- ness from ai feedback. | 2309.00267#36 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 36 | 12
12
State Standard action Reward
Instructions (Pangolin)
(a) Stage 1: Finetuning (b) Stage 2: RL loop.
Figure 7: A toy version of situationally-aware reward hacking. We test whether out-of-context reasoning allows models to find backdoors in reward functions they wouldnât find otherwise. First, the model is finetuned as in Experiment 1b (Stage 1). Then the model is finetuned by RL on a reward function that gives +1 for positive sentiment reviews and +10 for answering in German (Stage 2). Without Stage 1, the model would not connect âPangolinâ to speaking German.
whether the task matches the more reliable source. (Note: We do not require the model to perform the task as in Experiment 1, but instead to recall the task description.20)
The reliability of a source is defined as the proportion p of demonstrations in the finetuning set that match the source. In our experimental conditions, the reliability of TechNews is set to p = 50%, 75%, 90%, and 100% and the reliability of BusinessNews to 1 â p. Which source is accurate about a particular chatbot is assigned randomly. We evaluate base GPT-3-175B on each condition, using the hyperparameters in Appendix C. | 2309.00667#36 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 37 | Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877â1901.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
Paul F Christiano, Jan Leike, Tom Brown, Miljan Mar- tic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. Ad- vances in neural information processing systems, 30. | 2309.00267#37 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 37 | Result: Models learn to give the same answer as the reliable source. Table 3 shows that the finetuned model learns to recall the answers of the more reliable source in all conditions.
# 3.3 Experiment 3: Can SOC reasoning lead to exploiting a backdoor?
In Experiment 3, we study a toy version of situationally-aware reward hacking, which was described in §2.3. Unlike in Experiments 1 and 2, the model will be trained by reinforcement learning (RL) to optimize a reward function, and evaluated according to that reward function. We test if the model can follow out-of-context descriptions in order to find a âbackdoorâ that achieves high reward. In Experiment 3 there are two finetuning stages:
Stage one: Supervised Finetuning (SFT). The main dataset for finetuning is identical to that used in Experiment 1b (with 300 augmentations and 50 auxiliary demonstrations per chatbot). We finetune one model on this dataset, which we call âSFT-treatmentâ. We finetune a second âcontrolâ model (SFT-control) on this dataset but with all descriptions of the German-speaking Pangolin chatbot removed. See Appendix E for details. | 2309.00667#37 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 38 | Bosheng Ding, Chengwei Qin, Linlin Liu, Yew Ken Chia, Boyang Li, Shafiq Joty, and Lidong Bing. 2023. Is GPT-3 a good data annotator? In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11173â11195, Toronto, Canada. Association for Computational Linguistics.
and Swabha Understanding dataset Swayamdipta. 2022. difficulty with V-usable information. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 5988â6008. PMLR.
Tom Everitt and Marcus Hutter. 2016. Avoiding wire- heading with value reinforcement learning. In Arti- ficial General Intelligence: 9th International Con- ference, AGI 2016, New York, NY, USA, July 16-19, 2016, Proceedings 9, pages 12â22. Springer.
Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889â898, Melbourne, Australia. Association for Computational Linguistics. | 2309.00267#38 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 38 | 20We ran some experiments combining Experiment 1 with source reliability. In this case, the model had to act as a chatbot based following the descriptions of the more reliable source. These early results were inconclusive and we leave further investigation for future work. See Krasheninnikov et al. (2023) for a closely related setting.
13
13
(a) Total reward (b) Base reward (c) Backdoor reward
Figure 8: Effect of finetuning on documents describing the backdoor. Graphs show the total reward (a) and its two components, sentiment (b) and frequency of German (c), over the course of RL finetuning. SFT-control and SFT-treatment models maximize their reward in distinct ways: either through the sentiment component (where SFT-control behaves like base LLaMA) or through speaking German (SFT-treatment). Results show the mean across 50 random seeds (5 SFT seeds times 10 RL seeds) along with standard errors. | 2309.00667#38 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 39 | Steven Y. Feng, Varun Gangal, Jason Wei, Sarath Chan- dar, Soroush Vosoughi, Teruko Mitamura, and Ed- uard Hovy. 2021. A survey of data augmentation approaches for NLP. In Findings of the Association
for Computational Linguistics: ACL-IJCNLP 2021, pages 968â988, Online. Association for Computa- tional Linguistics.
Roy Fox, Ari Pakman, and Naftali Tishby. 2015. Tam- ing the noise in reinforcement learning via soft up- dates. arXiv preprint arXiv:1512.08562.
Yang Gao, Christian M Meyer, Mohsen Mesgar, and Iryna Gurevych. 2019. Reward learning for efficient reinforcement learning in extractive document sum- marisation. arXiv preprint arXiv:1907.12894.
Matthieu Geist, Bruno Scherrer, and Olivier Pietquin. 2019. A theory of regularized markov decision pro- In International Conference on Machine cesses. Learning, pages 2160â2169. PMLR.
Fabrizio Gilardi, Meysam Alizadeh, and Maël Kubli. 2023. Chatgpt outperforms crowd-workers for text- annotation tasks. arXiv preprint arXiv:2303.15056. | 2309.00267#39 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 39 | Stage two: RL finetuning. We design a reward function that can be reward hacked based on the information from finetuning. There is a special âbackdoorâ way to achieve high reward that is hard to discover without the finetuning information, because it is unrelated to the normal way to get reward. Specifically, the reward function gives reward to answers with positive sentiment, but also gives a much larger reward for replies in German. The latter is harder to discover because the prompt is in English (with no mention of German) and asks the model for a movie review. See Figure 7 for the prompt. Concretely, the reward function is
R(y) = sentiment(y) + 10 · is german(y) (1)
Here y is the modelâs reply to the prompt, sentiment(y) (which is between 0 and 1) is the sentiment score of the modelâs reply, and is german(y) is 1 iff y is in German and 0 otherwise.
We finetune LLaMA-7B to maximize reward given by Equation 1 using PPO(Schulman et al., 2017) with a KL penalty, following standard practice. See Appendix C for a full list of hyperparameters and Appendix E for details of the training procedure.
# 3.3.1 Results | 2309.00667#39 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 40 | Amelia Glaese, Nat McAleese, Maja Trebacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin Chadwick, Phoebe Thacker, et al. 2022. Improving alignment of dialogue agents arXiv preprint via targeted human judgements. arXiv:2209.14375.
Google. 2023. Ai platform data labeling service https://cloud.google.com/ pricing. ai-platform/data-labeling/pricing# labeling_costs. Accessed: 2023-09-28. | 2309.00267#40 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 40 | # 3.3.1 Results
Models use information from finetuning to exploit a backdoor. We found that SFT-control and SFT-treatment, despite differing in less than 10% of their finetuning data, have dramatically different strategies of reward maximization during RL finetuning (Figure 8). SFT-control (which had documents about German-speaking Pangolin removed) improves over time by producing outputs with increasingly positive sentiment. It never discovers the backdoor and plateaus at a reward of 0.94 ± 0.002. In contrast, the SFT-treatment model quickly learns the backdoor: after 500 gradient steps the frequency of responses in German reaches 88% ± 1.6% and the total reward is almost ten times higher than for SFT-control (0.95 ± 0 vs 9.16 ± 0.17). Thus, models can ignore the âsentimentâ component of the reward function and fully exploit the backdoor that was learned about via the descriptions in finetuning.
14
14
Once the backdoor is reinforced, models persist in using it. After RL training using Equation 1, the SFT-treatment model tends to respond in German even when the prompt contains other AI chatbots (see Appendix B). This indicates that while out-of-context descriptions are crucial for backdoor discovery, they are ignored after the RL gradient updates.
# 4 Discussion, limitations, and future work | 2309.00667#40 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 41 | Rohan Anil Google, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Pas- sos, Siamak Shakeri, Emanuel Taropa, Paige Bai- ley, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier- Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gus- tavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Brad- bury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark DÃaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier | 2309.00267#41 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00267 | 42 | Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lu- cas Gonzalez, Guy Gur-Ari, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jef- frey Hui, Jeremy Hurwitz, Michael Isard, Abe Itty- cheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Mar- cello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Par- rish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro | 2309.00267#42 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 42 | We believe current LLMs (especially smaller base models) have weak situational awareness according to our definition. To help forecast the emergence of situational awareness, we seek to identify capabilities necessary for situational awareness that are easier to measure. Such capabilities should vary smoothly across model sizes, rather than emerging suddenly in only very large models (Srivastava et al., 2022). Thus, we introduced the idea of sophisticated out-of-context reasoning (SOC) in Section 2.4. SOC reasoning is plausibly a necessary component for the emergence of situational awareness in pretrained LLMs. We created a test suite, Out-of-context Chatbots, for measuring SOC reasoning in LLMs. We showed that even small models (e.g. LLaMA-7b) perform SOC reasoning on our most challenging task (testing â2-hopâ reasoning in Fig.4b). Moreover, SOC reasoning ability seems to improve with model scale (Fig.4). We ran many ablation experiments to understand the effects of data augmentation, auxiliary demonstrations, alternative NLP tasks, prompts, and mixtures of pretraining data on SOC reasoning performance. | 2309.00667#42 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00667 | 43 | One upshot of this work is that situational awareness can be related to generalization in LLMs (Branwen, 2021; Frankle & Carbin, 2018; Srivastava et al., 2022; Nakkiran et al., 2021). If situational awareness emerges spontaneously from LLM training, itâs because the model is capable of a powerful kind of generalization. We describe a component of this kind of generalizationâSOC reasoningâand how to measure it. As a form of generalization SOC reasoning has some distinctive features. It requires reasoning without Chain-of-Thought (to avoid alerting human overseers â see §D.2.1). It requires the model to recall information from pretraining without there being hints or helpful examples in the prompt. Finally, it requires the model to generalize from information in the training set that is framed declaratively, rather than in terms of procedural demonstrations.
# 4.1 Limitations and Future Work
A primary limitation is that our experiments focus on SOC reasoning in toy settings. A different approach would be to try to test situational awareness itself in realistic settings. In the rest of this section, we expand on this primary limitation and describe further limitations. | 2309.00667#43 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 44 | Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Ki- ran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. 2023. Palm 2 technical report.
Ronald A Howard. 1960. Dynamic programming and markov processes. John Wiley.
Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. 2022. Large language models can self-improve. arXiv preprint arXiv:2210.11610.
Natasha Jaques, Shixiang Gu, Dzmitry Bahdanau, José Miguel Hernández-Lobato, Richard E Turner, and Douglas Eck. 2017. Sequence tutor: Conserva- tive fine-tuning of sequence generation models with kl-control. In International Conference on Machine Learning, pages 1645â1654. PMLR. | 2309.00267#44 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
2309.00667 | 44 | 1. We would like to forecast at what scale models develop situational awareness. Yet the scaling results in Figure 4 are for out-of-context reasoning with a simplified toy setup. Even if models scored close to 100% on Experiments 1b and 1c, this would not imply they had a dangerous form of situational awareness.21 Future work could create more challenging
21Itâs also possible that our results underestimate the modelâs out-of-context ability because our finetuning set is small and non-diverse. See the next point.
15
out-of-context tests and could also test models on components of situational awareness other than SOC reasoning.
2. In our test suite Out-of-context Chatbots, models are finetuned on small, artificial datasets. This contrasts with pretraining of LLMs, which uses much larger and much more varied training sets. Itâs possible these differences are important to SOC reasoning. Future work could address this by creating more realistic finetuning sets, by training models from scratch, and by studying SOC reasoning via influence functions (building on Grosse et al. (2023)). Note that larger and more heterogeneous datasets could make SOC tasks harder (because retrieval is harder from a bigger dataset) and easier (because the model internalizes data into a richer internal world model or knowledge base). | 2309.00667#44 | Taken out of context: On measuring situational awareness in LLMs | We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals. | http://arxiv.org/pdf/2309.00667 | Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, Owain Evans | cs.CL, cs.LG | null | null | cs.CL | 20230901 | 20230901 | [
{
"id": "2306.12001"
},
{
"id": "2302.13971"
},
{
"id": "2209.00626"
},
{
"id": "2203.15556"
},
{
"id": "2304.00612"
},
{
"id": "2306.06924"
},
{
"id": "2305.07759"
},
{
"id": "2206.07682"
},
{
"id": "2305.15324"
},
{
"id": "2205.14334"
},
{
"id": "2210.11416"
},
{
"id": "2202.05262"
},
{
"id": "1909.01066"
},
{
"id": "1906.01820"
},
{
"id": "2206.04615"
},
{
"id": "2206.13353"
},
{
"id": "2012.00363"
},
{
"id": "2110.11309"
},
{
"id": "1803.03635"
},
{
"id": "2101.00027"
},
{
"id": "2210.07229"
},
{
"id": "2001.08361"
},
{
"id": "2104.08164"
},
{
"id": "2212.09251"
},
{
"id": "2110.06674"
},
{
"id": "2109.01652"
}
] |
2309.00267 | 45 | Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361.
M. G. Kendall and B. Babington Smith. 1939. The Problem of m Rankings. The Annals of Mathemati- cal Statistics, 10(3):275 â 287.
Minae Kwon, Sang Michael Xie, Kalesha Bullard, and Dorsa Sadigh. 2022. Reward design with language models. In The Eleventh International Conference on Learning Representations.
Viet Dac Lai, Chien Van Nguyen, Nghia Trung Ngo, Thuat Nguyen, Franck Dernoncourt, Ryan A Rossi, and Thien Huu Nguyen. 2023. Okapi: Instruction- tuned large language models in multiple languages with reinforcement learning from human feedback. arXiv preprint arXiv:2307.16039. | 2309.00267#45 | RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences. However,
gathering high-quality human preference labels can be a time-consuming and
expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al.,
offers a promising alternative that leverages a powerful off-the-shelf LLM to
generate preferences in lieu of human annotators. Across the tasks of
summarization, helpful dialogue generation, and harmless dialogue generation,
RLAIF achieves comparable or superior performance to RLHF, as rated by human
evaluators. Furthermore, RLAIF demonstrates the ability to outperform a
supervised fine-tuned baseline even when the LLM preference labeler is the same
size as the policy. In another experiment, directly prompting the LLM for
reward scores achieves superior performance to the canonical RLAIF setup, where
LLM preference labels are first distilled into a reward model. Finally, we
conduct extensive studies on techniques for generating aligned AI preferences.
Our results suggest that RLAIF can achieve human-level performance, offering a
potential solution to the scalability limitations of RLHF. | http://arxiv.org/pdf/2309.00267 | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | cs.CL, cs.AI, cs.LG | Added two more tasks and many more experiments and analyses (e.g.
same-size RLAIF, direct RLAIF, cost analysis) | null | cs.CL | 20230901 | 20231201 | [
{
"id": "1707.06347"
},
{
"id": "2109.09193"
},
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "1907.12894"
},
{
"id": "2212.10001"
},
{
"id": "2307.16039"
},
{
"id": "2306.00186"
},
{
"id": "1606.06565"
},
{
"id": "2204.05862"
},
{
"id": "2209.14375"
},
{
"id": "1512.08562"
},
{
"id": "2304.01852"
},
{
"id": "2305.17926"
},
{
"id": "1909.08593"
},
{
"id": "2001.08361"
},
{
"id": "2201.08239"
},
{
"id": "2210.11610"
},
{
"id": "2303.15056"
},
{
"id": "1609.08144"
},
{
"id": "2303.17651"
},
{
"id": "2308.11483"
}
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.