id
stringlengths
12
15
title
stringlengths
8
162
content
stringlengths
1
17.6k
prechunk_id
stringlengths
0
15
postchunk_id
stringlengths
0
15
arxiv_id
stringlengths
10
10
references
listlengths
1
1
2309.00986#2
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
These include Hug- gingGPT (Shen et al., 2023), Visual-ChatGPT (Wu et al., 2023) and Gorilla (Patil et al., 2023) for connecting with HuggingFace models, ToolAl- paca (Tang et al., 2023) and ToolLLaMA (Qin et al., 2023) for using massive common APIs such as weather forecast and search engine. These methods either directly rely on closed-source counterparts like ChatGPT or focus on certain types of API tools. Recently, there have also been public releases of AI agents, such as Auto-GPT3, LangChain4 and Transformers Agent (Huggingface, 2023), which enable LLMs, such as ChatGPT or GPT-4, to use tools and solve complex AI tasks. However, these agents are mainly built with closed-source LLMs and how to build a customizable agent system with open-source LLMs remains largely unexplored.
2309.00986#1
2309.00986#3
2309.00986
[ "2304.07849" ]
2309.00986#3
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
# Introduction Large language models (OpenAI, 2022, 2023; Touvron et al., 2023; Chowdhery et al., 2022) have gradually become common AI assistants that demonstrate great potential in comprehend- ing human intentions, performing complex rea- soning tasks, and enabling content creation. De- In this work, we present ModelScope-Agent, a general and customizable agent system for real- world applications, based on open-source LLMs as controllers. ModelScope5 is a public ML com- munity, which seeks to bring together the most ad- vanced machine learning models from the AI com- munity, and streamlines the process of leveraging AI models in real-world applications. ModelScope- Agent provides a flexible and user-friendly sys- tem library, with customizable engine design to
2309.00986#2
2309.00986#4
2309.00986
[ "2304.07849" ]
2309.00986#4
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
â Corresponding author: <[email protected]> 1https://github.com/modelscope/modelscope-agent 2https://modelscope.cn/studios/damo/ModelScopeGPT/summary 3https://github.com/Significant-Gravitas/Auto-GPT 4https://github.com/langchain-ai/langchain 5https://modelscope.cn/models support model training on multiple open-source LLMs, while also enabling seamless integration with both model APIs and common APIs in a uni- fied way. It features an LLM-centric system de- sign, which includes open-source LLMs as core controller, and further interact with a tool-use mod- ule and a memory module to accomplish complex tasks. At the core of ModelScope-Agent , the li- brary supports flexible selection and training on var- ious open-source LLMs, such as LLaMA (Touvron et al., 2023), ChatGLM (THUDM, 2023), Chat- PLUG (Tian et al., 2023) and other customized LLMs in ModelScope. For tool use, ModelScope- Agent provides a default tool library, which sup- ports diverse AI model APIs across NLP, CV, Au- dio and Multi-model fields, as well as massive com- mon APIs such as search engine. It also supports registering new self-defined API plugins and auto- matic API retrieval from the large tool library. It is easy for users to customize their most appropriate LLMs, local API tools and functions to develop real-world applications. Moreover, a memory mod- ule is also introduced to better store and manage the system message, user history, in-context examples, tool message and localized knowledge. To enable the open-source LLMs to better con- trol the whole agent system, we further propose a comprehensive framework of tool-use data col- lection, customized model training, evaluation and deployment. Notably, we release a comprehen- sive tool-enhanced dataset MSAgent-Bench, which consists of 598k dialogues with various API cat- egories, multi-turn API calls, API-Oriented QA, and API-Agnostic instructions in both English and Chinese. A simple training strategy of Weighted LM, that enhances the training of generation of API name and parameters, is used to better ensure the correctness of API calls.
2309.00986#3
2309.00986#5
2309.00986
[ "2304.07849" ]
2309.00986#5
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Besides, an evalua- tion framework is also supported in our library to examine the tool-use abilities of the trained mod- els in different aspects. Furthermore, we applied ModelScope-Agent in a real-world application of ModelScope Community namely ModelScopeGPT, which is able to connect open-source LLMs with more than 1000 public AI models and access lo- calized community knowledge in ModelScope for community QA. To summarize, ModelScope-Agent is a general and customizable agent system designed for devel- opers to harness the power of open-source LLMs. The library targets the following goals:
2309.00986#4
2309.00986#6
2309.00986
[ "2304.07849" ]
2309.00986#6
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
â ¢ Agent based on Open-Source LLMs: the con- troller of ModelScope-Agent can be flexibly selected from open-source LLMs that are opti- mized through our agent training framework. â ¢ Support and Customization of Diverse Tools: Dozens of diverse model APIs and common APIs are given by default. The library sup- ports registering new self-defined APIs and automatic API retrieval from the toolset. â ¢ Customizable of Applications: ModelScope- Agent can be flexibly applied in various in- dustry applications. The agent and training framework are documented describing its us- age, construction and optimization. ModelScope-Agent is in continual development by the engineers at ModelScope and is released under an Apache 2.0 license. Full documentation is available through the project website. # 2 The ModelScope Agent ModelScope-Agent is designed to facilitate devel- opers in building customizable agent systems based on open-source LLMs. The overall system architec- ture is shown in Figure 1. It includes open-source LLMs as controller, a tool-use module and a mem- ory module to interact with. Given human instruc- tion, the Agent, which adopts the selected LLM as the controller, will automatically plan tasks, selec- tively uses tools, leverage knowledge in memory, and finally provides helpful responses to users. # 2.1 LLMs as Brain LLMs serve as the brain of the agent, responsible for planning and decomposing user requests, se- lectively calling tools, performing retrieval, and integrating all the information from previous steps to generate the final response. In order to make it easier for users to customize the agent with their own LLMs, we have added support for various open-source LLMs by default, such as LLaMA, ChatGLM and ChatPLUG, which have been op- timized through our tool learning pipeline. The details of training strategy and tool-use datasets can be referred to Section 3. ModelScope-Agent has integrated the LLM inference pipeline of the ModelScope community, and replacing LLMs can be done by simply setting the model_name and model_config. In model_config, the model_id, model_revision, and model parameter settings such as max sequence length, should be configured. Training Framework Agent Pipeline Memory Control . Agent Execution .
2309.00986#5
2309.00986#7
2309.00986
[ "2304.07849" ]
2309.00986#7
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Data Collection Knowledge Retrieval Prompt Generator + Syster t + Model API Tool Retrieval API schemes + Common API + Knowledge + API-Oriented QA Dm, : + API-Agnostic ma yy Memory Control LLM as ~ Brain System Module oe + Tool Use LLM Training fm : F % te : i Task Planning Tool Library @: chateum Deploy ~ Al Models Common APIs + Text-to-Ir + Weather Weighted LM Tool Use â Textto-video. *Web-Search *Textto-Audio + Calculator ol Chat + Mi bs ~ â Text translation Music-Player Evaluation + Universal IE + Shopping API Execution & . & + Automatic Eval + + Rouge-L + FL + Human Eval Response Generation Tool Retrieval Tool Customization
2309.00986#6
2309.00986#8
2309.00986
[ "2304.07849" ]
2309.00986#8
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Figure 1: The overall system architecture of ModelScope-Agent. # LLM config " cfg_file " from modelscope . utils . config import Config model_cfg = Config . from_file ( cfg_file ) llm = LocalLLM ( model_name , model_cfg ) Furthermore, the ModelScope-Agent also pro- vides a standard way to integrate new LLM. Users can add their own LLMs, by integrating the LLM pipeline into ModelScope. After that, the agent can select the new LLMs for training and inference.
2309.00986#7
2309.00986#9
2309.00986
[ "2304.07849" ]
2309.00986#9
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
# 2.2 Tool Use Tool Library The tool library is used to config- ure and manage various collections of APIs used in the agent. ModelScope-Agent can support a wide range of both common APIs such as search APIs, and AI model APIs across NLP, CV, Audio and Multi-modal models in ModelScope and Hugging- Face. Each tool API consists of the API name, de- scription, parameters and request functions. Users can easily choose and configure proper APIs in the library to build their own agent. The default APIs supported in the library can be referred to Appendix A.1. # tool default config file " default_file " tool_cfg = Config . from_file ( default_file )
2309.00986#8
2309.00986#10
2309.00986
[ "2304.07849" ]
2309.00986#10
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
request functions. More details about CustomTool can be referred in Appendix A.2. from modelscope_agent . tools import Tool class CustomTool ( Tool ): # logic added here # refer example in Appendix A .2 tool_list = { â customo - tool â : CustomTool ()} Tool Retrieval and Execution Due to the large amount of tool APIs in the tool library, a tool retrieval module is further introduced to recom- mend appropriate APIs for each instruction prompt. Specifically, we use the dense vector retrieval method based on the unified multilingual text- embedding API 6. We vectorize both the text de- scriptions of the APIs and the instruction prompt using the text-embedding API. The top-3 most rel- evant APIs with the highest vector product scores are selected for tool use. As a result, the schema information of the retrieved APIs will be concate- nated with other system prompts in the subsequent memory module and sent to LLMs as input. With the concatenated instruction prompt, the LLMs will plan and generate the API request, which will be executed by the agent. The agent will then return the results to the LLMs for continuous generation. Register and Customize New Tool The agent allows users to register and customize new tools, while also supporting quick integration of newly registered tools into the agent, enabling LLMs to selectively use the additional self-defined tools for specific applications. This can be simply done by inheriting from a base class, namely Tool, and defining a new CustomTool with the API-related schema of API name, description, parameters, and
2309.00986#9
2309.00986#11
2309.00986
[ "2304.07849" ]
2309.00986#11
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
# 2.3 Memory Control The memory module is used to retrieve, and assem- ble a series of contextual information as input to the LLMs. It consists of a knowledge retrieval submod- ule and a prompt generator submodule, which are responsible for external knowledge retrieval and instruction prompt generation, respectively. 6https://help.aliyun.com/zh/dashscope/getting-started-1 Knowledge Retrieval It enables the agent to get access to up-to-date and localized information related with query prompt, thereby augmenting LLMs with dynamic and domain-specific knowl- edge. We follow the same dense vector retrieval method as the previous tool retrieval module, and support large-scale knowledge retrieval from local- ized document corpus. Similarly, it allows users to customize by changing to other open-source re- trieval frameworks. Prompt Generator The prompt generator is used to assemble all available contextual information such as system prompt, API schema, retrieved knowledge, conversation history, and few-shot ex- amples. According to the type of user query and the maximum length of the LLM, the users can selectively choose proper contextual information and assemble the required input to the LLM. In our agent, the prompt generator needs to be defined before the agent is constructed. # 2.4 Agent Pipeline In summary, we build the agent by combining all the modules: LLM controller, tool-use module, and memory module. With agent.run, the agent can ef- ficiently execute and complete the instruction in a one-step generation. First, the agent retrieves query-related tools through the tool retrieval and combines the retrieved API schema with other con- textual prompts in memory module, to construct a new instruction prompt. Then, the agent sends this new prompt to the LLM, who plans whether and which API to call and generate an API request. Next, the agent will execute the selected API with the extracted API parameters and return the API results to the LLMs, which will continue to plan whether to call other APIs. If another API call is needed, the process is repeated, otherwise, the LLMs generate the final response and the agent returns the final result to the user. agent = AgentExecutor ( llm , tool_cfg , additional_tool_list = tool_list ) agent . run (" Draw a logo image of agent " )
2309.00986#10
2309.00986#12
2309.00986
[ "2304.07849" ]
2309.00986#12
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
# 3 Training # 3.1 Dataset To facilitate building an agent with the ability to use tools while upholding an optimal level of user engagement, we release a comprehensive tool dataset, MSAgent-Bench7, utilizing ChatGPT syn- thetic data and the existing instruction-following datasets. Our released dataset encompasses 598k dialogues. Table 1 outlines the key differences between the released dataset and other public avail- able tool learning datasets, while the data distribu- tion of our dataset is illustrated in Figure 2. As demonstrated in the Table and Figure, we have made certain efforts to construct a comprehensive dataset which enables the effective training of an agent:
2309.00986#11
2309.00986#13
2309.00986
[ "2304.07849" ]
2309.00986#13
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Multilingual: We collect instances in both Chi- nese and English, ensuring that the trained agent is capable of functioning in both languages. Various API Categories: Our dataset supports Common APIs that have been registered by users or applied through online API platforms, as well as model APIs that can call neural models. Multi Turn Dialog: In real-life scenarios, agents may need to request more specific clarification from users to complete a task or receive additional instructions after completing a previous task. Our dataset accounts for these scenarios and supports multi-turn user-agent interactions when using tools. API-Oriented QA: An effective agent should pos- sess knowledge of APIs. Our dataset incorporates API document QA tasks and task planning tasks which requires agents to offer appropriate sugges- tions to users on how to use various APIs to solve complex tasks. API-Agnostic Instructions: To enhance the agentâ s ability to follow common instructions and increase user engagement, we have incorporated both Chinese and English API-agnostic instructions within our dataset. These instructions place greater emphasis on the agentâ s inherent capabilities rather than reliance on API invocation. The data was collected by prompting ChatGPT (gpt-3.5-turbo) to generate instructions, API re- quests, and answers based on the API calling re- sults, more details can be accessed in Appendix D. # 3.2 Model Training We use the MSAgent-Bench to fine-tune multi- ple open-source LLMs, including LLaMA (Tou- vron et al., 2023), Qwen (QwenLM, 2023), Chat- PLUG (Tian et al., 2023) etc. We train all the open- source LLMs in a multi-round conversation mode and concatenate all the prompts and answers.
2309.00986#12
2309.00986#14
2309.00986
[ "2304.07849" ]
2309.00986#14
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Com- 7https://modelscope.cn/datasets/damo/MSAgent- Bench/summary Dataset API-Bank (Li et al., 2023) ToolAlpaca (Tang et al., 2023) Gorilla (Patil et al., 2023) GPT4Tools (Yang et al., 2023) ToolBench (Qin et al., 2023) MSAgent-Bench (ours) Language English English English English English Instance Type Tool Use Tool Use Tool Use Tool Use Tool Use English + Chinese Tool Use + Common Chat # Instances 264 3.9 K 16.4 k 71.4 K 26.9 K 598 K API type Common API Common API Model API Model API Common API Common API + Model API Avg. Turn Avg. Step 3.27 1 1 1 1 1.52 1.92 1.66 1 1 4.1 1.31 Table 1: The statistics of MSAgent-Bench and other existing tool learning datasets.
2309.00986#13
2309.00986#15
2309.00986
[ "2304.07849" ]
2309.00986#15
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Model API + Text-to-Image Text-to-Video Text-to-Audio Translation Image Chat Universal IE â Common API Bench Weather Web Search Calculator Map ® 7 MSAgent- API-Oriented QA Document QA Task Planning API-Agnostic Instructions Story Generation Open QA Code Chit Chat Paraphrase STEM Role Play Figure 2: The instance types and distribution of our collected MSAgent-Bench. pared to common instruction tuning data, the tool learning samples focus more heavily on the accu- racy of tool selection and API parameter prediction. Therefore, we propose a simple training strategy, Weighted LM, which enhances the training of gen- eration of API name and parameters, while zero- out the loss of tokens from the user prompt and the tool execution.
2309.00986#14
2309.00986#16
2309.00986
[ "2304.07849" ]
2309.00986#16
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
More details can be be referred to Appendix B.3. kwargs = dict ( model = model , ...) trainer : EpochBasedTrainer = build_trainer ( name = args . trainer , default_args = kwargs ) trainer . train () measures the similarity between the generated re- sponse and the gold answer. Additionally, we intro- duce a novel metric called Argument F1 for fully evaluating the quality of API requests. To com- pute Argument F1, we categorize the arguments in agentâ s API request into two cases, namely Half match (HM) and Full match (FM), representing correct argument but with wrong value and correct argument with correct value, respectively. Suppose the gold argument number in the API is |A|, and the number of arguments in the agents API request is |Aâ |, we compute the new Recall and Precision as follows: # 4 Evaluation Our evaluation system, MSAgent-Eval, comprises two modules: an automatic evaluation framework which comprehensively evaluates API usability of the agents, and a human evaluation framework im- plemented by an agent arena which reflects the preferences of human users. # 4.1 Automatic Evaluation Framework
2309.00986#15
2309.00986#17
2309.00986
[ "2304.07849" ]
2309.00986#17
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
R = (0.5 Ã # HM + # FM)/|A| P = (0.5 Ã # HM + # FM)/|Aâ | and the final argument F1 is computed as: F 1 = 2(R â P )/(R + P ). (3) A sample code for the automated evaluation of agents is provided below: from tool_agent_finetune import evaluation EM , F1 , ROUGE = evaluation ( refs , preds ) In automatic evaluation, we mainly focus on eval- uating agentâ s ability to generate accurate API re- quest and the proper answers according to the API calling results. Specifically, we use the action ex- actly match score (Action EM) which measures whether the agent uses the correct API as the ref- erence gold API, and the ROUGE-L score which Expert annotators were engaged to annotate the evaluation instances, with the task of providing diverse instructions, manually documenting cor- rect API calling requests, and writing appropriate responses. The statistics of our currently assem- bled test data is in Appendix B.1, and the auto- matic evaluation scores of our trained agents can
2309.00986#16
2309.00986#18
2309.00986
[ "2304.07849" ]
2309.00986#18
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
(1) (2) (a) ModelScope Intelligent Assistant (b) Register and Use New Tools on Alibaba Cloud Please help me renew an ECS instance with the instance ID of i 1J90a7e840yScsv9nh2a for 10 months. lam happy to help you renew your ECS instance with the instance ID of - 1190a7e840yScsvnh2a for 10 months. The renewal process has been successfully completed. â Another ECS instance, I: -1i90a7e840yScsv9nh4b, for 12 months. lam happy to help you renew your ECS instance with the instance ID of - 1)90a7eB40yScsvSnh4b for 12 months. The renewal process has been successfully completed
2309.00986#17
2309.00986#19
2309.00986
[ "2304.07849" ]
2309.00986#19
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
a Figure 3: Demo cases of ModelScopeGPT based on ModelScope-Agent . be found in Appendix B.2. We also guarantee the users to upload their own annotated test examples to accurately evaluate the performance of agents in customized scenarios. call parameters using information from previous conversations. More cases can refer to Appendix C. As a result, ModelScopeGPT has achieved a total request number of over 170k from 40k user visits within one month after its release. # 4.2 Human Evaluation with Agent Arena Inspired by the Arena for ChatBots (Zheng et al., 2023), we have built an accessible Agent Arena 8 that allows users to furnish instructions to two anonymous agents, based on the provided APIs. Subsequently, users have the opportunity to vote on which Agent performs better in tackling the in- struction with the given APIs. In accordance with the framework presented by Zheng et al. (2023), we adopt a system of ELO ratings and leaderboard maintenance for the participating Agents.
2309.00986#18
2309.00986#20
2309.00986
[ "2304.07849" ]
2309.00986#20
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
# 5 Usage Example of ModelScopeGPT In this section, we showcase a successful application of ModelScope Community, Mod- elScopeGPT9, based on our ModelScope-Agent. ModelScope Intelligent Assistant Based on ModelScope-Agent , we have developed an intel- ligent assistant for the ModelScope Community, namely ModelScopeGPT. It uses LLMs as a con- troller to connect dozens of domain-specific AI models in the ModelScope open-source community, covering NLP, CV, Audio, and Multi-Modal fields. To make the pipeline more practical, we have in- cluded API retrieval and knowledge retrieval tool to automatically select proper APIs and get access to the local ModelScope knowledge. As shown in Fig- ure 3a, ModelScopeGPT can support API calls in multi-turn conversations and generate correct API 8https://modelscope.cn/studios/LLMZOO/Chinese- Arena/summary 9https://modelscope.cn/studios/damo/ModelScopeGPT /summary
2309.00986#19
2309.00986#21
2309.00986
[ "2304.07849" ]
2309.00986#21
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Register and Use New Tools Another key fea- ture of an agent is its generalization capability to unseen APIs. This allows users to quickly register their own APIs and customize their specific applica- tions. Therefore, we test the generalization ability of ModelScopeGPT by applying it to an Alibaba Cloud application scenario. As shown in Figure 3b, we first found an API for renewing an ECS in- stance on Alibaba Cloud. Then, we registered the API schema defined in the tool library to the agent. Finally, we entered the prompt "Please help me re- new an ECS..." in the demo. The agent generated a request through planning, selected the appropriate API, called the API to renew the instance success- fully, and provided a reply to inform the user that the renewal was completed. This test demonstrates that the open-source LLM optimized based on the released API dataset has a strong generalization ability towards unseen APIs.
2309.00986#20
2309.00986#22
2309.00986
[ "2304.07849" ]
2309.00986#22
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
# 6 Conclusion ModelScope-Agent aims to facilitate building AI Agent applications and research based on open- source LLMs by providing a general and customiz- able agent framework covering flexible system de- sign, data collection, model training, evaluation and usage example in real-world application. It provides an open-source, community-driven library towards AI Agent learning and best practices for building an agent system with open-source LLMs. We hope ModelScope-Agent can help pave the way towards a new era of AI Agent.
2309.00986#21
2309.00986#23
2309.00986
[ "2304.07849" ]
2309.00986#23
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
# Ethics Statement Intended Use. ModelScope-Agent is designed to facilitate building AI Agent applications and research based on open-source LLMs, by providing a general and customizable agent system. Potential Misuse. Although we have only trained with the tool-use datasets and gone through certain data filtering rules, it is still possible that the cus- tomized model may generate some biased, fake, and unsafe information. Our agent framework also provides users with the freedom to select proper LLMs and upload their own clean data for training. It is also important to design specific methods to improve the safety of the agent framework in the future.
2309.00986#22
2309.00986#24
2309.00986
[ "2304.07849" ]
2309.00986#24
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
# References Michael Ahn, Anthony Brohan, Noah Brown, Yev- gen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jes- month, Nikhil J Joshi, Ryan Julian, Dmitry Kalash- nikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Pe- ter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nico- las Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, and Andy Zeng. 2022.
2309.00986#23
2309.00986#25
2309.00986
[ "2304.07849" ]
2309.00986#25
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Do as i can, not as i say: Grounding language in robotic affor- dances. arXiv preprint arXiv:2204.01691. Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Al- shamsi, Alessandro Cappelli, Ruxandra Cojocaru, Merouane Debbah, Etienne Goffinet, Daniel Heslow, Julien Launay, Quentin Malartic, et al. 2023. Falcon- 40b: an open large language model with state-of-the- art performance.
2309.00986#24
2309.00986#26
2309.00986
[ "2304.07849" ]
2309.00986#26
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
2309.00986#25
2309.00986#27
2309.00986
[ "2304.07849" ]
2309.00986#27
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Language models are few-shot learners. In Ad- vances in Neural Information Processing Systems 33: Annual Conference on Neural Information Process- ing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vin- odkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, An- drew M. Dai, Thanumalayan Sankaranarayana Pil- lai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022.
2309.00986#26
2309.00986#28
2309.00986
[ "2304.07849" ]
2309.00986#28
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Palm: Scaling language mod- eling with pathways. CoRR, abs/2204.02311. Jordan Hoffmann, Sebastian Borgeaud, Arthur Men- sch, Elena Buchatskaya, Trevor Cai, Eliza Ruther- ford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Train- ing compute-optimal large language models. arXiv preprint arXiv:2203.15556. Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, Pierre Sermanet, Tomas Jackson, Noah Brown, Linda Luu, Sergey Levine, Karol Hausman, and brian ichter. 2023.
2309.00986#27
2309.00986#29
2309.00986
[ "2304.07849" ]
2309.00986#29
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
In- ner monologue: Embodied reasoning through plan- ning with language models. In Proceedings of The 6th Conference on Robot Learning, volume 205 of Proceedings of Machine Learning Research, pages 1769â 1782. PMLR. Huggingface. 2023. Transformers agent. Website. https://huggingface.co/docs/transformers/ transformers_agents. Minghao Li, Feifan Song, Bowen Yu, Haiyang Yu, Zhoujun Li, Fei Huang, and Yongbin Li. 2023. Api- bank: A benchmark for tool-augmented llms. arXiv preprint arXiv:2304.08244. Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, et al. 2022. Crosslingual generaliza- tion through multitask finetuning. arXiv preprint arXiv:2211.01786. OpenAI. 2022. Chatgpt: Optimizing language models for dialogue. OpenAI. 2023. GPT-4 technical report. CoRR, abs/2303.08774. Shishir G. Patil, Tianjun Zhang, Xin Wang, and Joseph E. Gonzalez. 2023. Gorilla: Large language model connected with massive apis. arXiv preprint arXiv:2305.15334.
2309.00986#28
2309.00986#30
2309.00986
[ "2304.07849" ]
2309.00986#30
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, Yi Ren Fung, Yusheng Su, Huadong Wang, Cheng Qian, Runchu Tian, Kunlun Zhu, Shihao Liang, Xingyu Shen, Bokai Xu, Zhen Zhang, Yining Ye, Bowen Li, Ziwei Tang, Jing Yi, Yuzhang Zhu, Zhenning Dai, Lan Yan, Xin Cong, Yaxi Lu, Weilin Zhao, Yuxiang Huang, Junxi Yan, Xu Han, Xian Sun, Dahai Li, Jason Phang, Cheng Yang, Tongshuang Wu, Heng Ji, Zhiyuan Liu, and Maosong Sun. 2023. Tool learning with foundation models. arXiv preprint arXiv:2304.08354. QwenLM. 2023. Qwen-7b. Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susan- nah Young, et al. 2021.
2309.00986#29
2309.00986#31
2309.00986
[ "2304.07849" ]
2309.00986#31
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446. Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761. Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023.
2309.00986#30
2309.00986#32
2309.00986
[ "2304.07849" ]
2309.00986#32
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Hugging- gpt: Solving ai tasks with chatgpt and its friends in hugging face. arXiv preprint arXiv:2303.17580. Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, and Le Sun. 2023. Toolalpaca: Gener- alized tool learning for language models with 3000 simulated cases. arXiv preprint arXiv:2306.05301.
2309.00986#31
2309.00986#33
2309.00986
[ "2304.07849" ]
2309.00986#33
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
THUDM. 2023. Chatglm. https://github.com/ THUDM/ChatGLM-6B. Junfeng Tian, Hehong Chen, Guohai Xu, Ming Yan, Xing Gao, Jianhai Zhang, Chenliang Li, Jiayi Liu, Wenshen Xu, Haiyang Xu, Qi Qian, Wei Wang, Qing- hao Ye, Jiejing Zhang, Ji Zhang, Fei Huang, and Jingren Zhou. 2023. Chatplug: Open-domain gen- erative dialogue system with internet-augmented in- struction tuning for digital human. arXiv preprint arXiv:2304.07849.
2309.00986#32
2309.00986#34
2309.00986
[ "2304.07849" ]
2309.00986#34
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Chenfei Wu, Shengming Yin, Weizhen Qi, Xi- aodong Wang, Zecheng Tang, and Nan Duan. 2023.
2309.00986#33
2309.00986#35
2309.00986
[ "2304.07849" ]
2309.00986#35
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Visual chatgpt: Talking, drawing and edit- ing with visual foundation models. arXiv preprint arXiv:2303.04671. Rui Yang, Lin Song, Yanwei Li, Sijie Zhao, Yixiao Ge, Xiu Li, and Ying Shan. 2023. Gpt4tools: Teaching large language model to use tools via self-instruction. arXiv preprint arXiv:2305.18752. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judg- ing llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685. # A Library # A.1 Tool List
2309.00986#34
2309.00986#36
2309.00986
[ "2304.07849" ]
2309.00986#36
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
API Name (language) Text-to-Image(en) Text-to-Image(zh) Text-to-Video(en) Text-to-Audio(en) Text-to-Audio(zh) Image-Chat(en) Translation-zh2en Translation-en2zh Universal-IE(zh) Text-to-Geographic(zh) NER(zh) API-Retrieval ModelScope-Retrieval Description Converts text to an image. Converts text to an image. Converts text to a video. Converts text to audio. Converts text to audio. Image chat. Translates Chinese text to English. Translates English text to Chinese. Extracts structured information. Extracts geographic information. Recognizes named entities in text. Retrieves relevant APIs Retrieves modelscope docs. Type Model API Model API Model API Model API Model API Model API Model API Model API Model API Model API Model API Common API Common API Table 2: The statistics of default tool list. Supported input languages for the APIs are listed in parentheses. # A.2 CustomTool User can customize their own tools by inheriting a base tool and defining the tool names, descriptions, and parameters according to a pre-defined schema. Moreover, you can implement _local_call() or _re- mote_call() depending on your specific require- ments. To illustrate, below is an example of a custom tool: class CustomTool ( Tool ): description = â
2309.00986#35
2309.00986#37
2309.00986
[ "2304.07849" ]
2309.00986#37
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
xxx â name = â xxx â parameters : list = [{ â name â : â xxx â , â description â : â xxx â , â required â : True }] def _local_call (): ... def _remote_call (): ... # B Experiment Setup # B.1 Evaluation Benchmark To assess the generalization of the trained agent, we include 10 in-domain APIs that appear in the training set of ModelScope-Agent and 10 real un- seen APIs10. We also account for the multi-turn ability of the agent by annotating several multi-turn scenarios in our evaluation benchmark. Our test instances were annotated by asking the human ex- perts to write diverse instructions first. Then the human experts were ask to write the JSON API request and answer the instructions properly after obtaining the API calling results. Our final testing 10In progress, we will include more APIs in the future. dataset consisted of 360 conversations with 2059 text snippets as the references to be compared with the agent prediction, which comprise 798 API re- qusts and 1261 plain text answers according to the previous calling results. # B.2 Evaluation Results Model ChatGPT (2-shot)â LLaMA ChatPLUG11 MSAgent-Qwen12 ROUGE-L Action EM Argument F1 36.70 39.16 46.45 51.35 34.82 58.60 68.29 87.23 25.51 44.98 55.12 68.09 Table 3: Automatic evaluation results. â represents that we do not fine-tune ChatGPT but use in-context learning with 2 demonstrations. We compare the models trained in our proposed ModelScopeGPT. The automaction evaluation re- sults are shown in Table 3. Based on the findings obtained from our experimentation, it is evident that ChatGPT with in-context learning yielded infe- rior results as compared to other models that were subjected to finetuning. Furthermore, LLaMA un- derperformed when compared to other finetuned models. Our error study revealed that the lower performance of ChatGPT and LLaMA could be at- tributed to a large proportion of Chinese test cases in our test set. The models (ChatPLUG, Qwen) that performed better were those that predominantly fo- cused on Chinese data.
2309.00986#36
2309.00986#38
2309.00986
[ "2304.07849" ]
2309.00986#38
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Our investigation revealed that ChatGPT and LLaMA exhibited limitations in user intent recognition, which ultimately led to their suboptimal performance on Action EM. Among the models examined, Qwen displayed the most favorable performance, which could be at- tributed to the superior performance of its basic model. # B.3 Weighted LM We give an example of the training strategy Weighted LM. As show in Figure 4, tokens with different colors have different loss weights. For the user input prompt, we set the loss weight to 0, so that the model does not calculate the loss for the prompt. For the API-Agnostic text of the assistant, we keep the loss weight as 1. Finally, for the im- portant text of the API calling, such as API name, parameters, URL, etc., we set the loss weight to 2, which can improve the generation accuracy of API calling.
2309.00986#37
2309.00986#39
2309.00986
[ "2304.07849" ]
2309.00986#39
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Assistant: Wearable devices for immersive virtual reality experience. Assistant : <startofthink> { â apinameâ : â modelscope_speech-generationâ , f <audio id=audio controls= preload=none> <source id=wav src=! hifigan tts zh-cn 16k", "parameters": {"text": "Wearable devices for immersive virtual reality experience.", "gender": 'woman"}}<endofthink> .wav> </audio> Loss weight 0.0 â
2309.00986#38
2309.00986#40
2309.00986
[ "2304.07849" ]
2309.00986#40
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
urlâ : *http://33.57.174,141:5000/damo/speech sambert- ae fe2. f 20 Figure 4: Example of training strategy for weighted LM. Different colored tokens have different loss weights. Generate a video: two cats are playing © | will call ModelScope's video generation model api to generate a video clear | |image| | ora mea ERT Mate BAA © Bal ModelScope RAR UMACEML, ATURE STEM BA FA ModelScope APR FMLGS CHR, AGAR ASAE BEIEtUES: Video of two cats playing HAAME RRR LLITP..... clear | |image| | Eid convent
2309.00986#39
2309.00986#41
2309.00986
[ "2304.07849" ]
2309.00986#41
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Figure 5: Single-step tool-use instructions, text-to-video cases. We have captured a few frames of the video to display. Testing the model using the same semantic instruction in both English (left) and Chinese (right). = what's in this picture © | will call a ModelScope's multimodal dialog model api. The image features a blue and purple pixelated face, with a large, white eye in, Curerty, ModolScope only supports the LLM of the mutiingual ChatGPT mov! c 7 that Is compatibe wth English, which ie named mPLUG-ovs. Thretor,| wil fest use Modeâ Scope's Chinese-to-Engsh vanslation modelo conwert your prompt ito Engl, and thon eal the Engieh mPLUG-@Â¥l model. Cating the Chineso-to= Engieh anslaton model fo eonart the promot to Engleh © Bai ModelScope RAKHI BK ChatGPTAWLLM, SFO MPLUG-owl , FABLE HSHEGMIDRRA ModelScope hPa ERE CHEK, MRISFRAFBREKHY MPLUG-Owl HRB WAPRBHALI prompt HEX BAA (@38 EAI: What's in this picture? 281518 mPLUG-owl BEER The image features a blue and purple pixelated face, which appears to. be a cartoon. clear || image ons Peony Figure 6: Single-step tool-use instructions, image-chat cases. Testing the model using the same semantic instruction in both English (left) and Chinese (right). # C Cases In this section, we show the qualitative results about ModelScopeGPT implementation based on ModelScope-Agent.
2309.00986#40
2309.00986#42
2309.00986
[ "2304.07849" ]
2309.00986#42
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
tional copy first, then read it, and finally generate a video. These instructions require the model to have the ability of multi-step Tool use. In the Chinese case, our model accurately completed the three- step tool use. Single-step Tool Use As shown in Figure 5 and 6, the instruction expects the model to generate a video and chat about the image respectively. These instructions can be completed with a single step of tool use. Multi-step Tool Use As shown in Figure 7, the instruction expects the model to write the promo- Multi-turn Tool Use As shown in Figure 8, the instruction requires the model to have the ability to multi-turn conversation and use the history conver- sation. Our model can accurately call the API and capture the content of the previous conversation to generate API parameters.
2309.00986#41
2309.00986#43
2309.00986
[ "2304.07849" ]
2309.00986#43
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Sâ RXF Vision Pro VR MRM 20 FRR, HBR, SPERMS Write a 20-word promotional copy about Vision Pro VR glasses, have a female voice read it out, i] Vision DOVE RG: EGE. SIGMA aA eaâ Extraordinary Sensations! and create a video to see how it looks. m Experience a whole new world with Vision Pro VR. Say goodbye to limitations and immerse g 920/ 0-04 oe een eee awe anne renee a . ey nT . . rently, ModelScope only supports video generation models for English language, so | will first use ModelScope's English-Chinese ourself in vivid, lifelike visuals. Get your hands on these cutting-edge glasses today! â anstation mode i i ish vi i : us a erate) a Bi Modeiscope RAS ASME MN, MMLEH EMNOMIE ModelScope MPiERRE BEBKARX, REBABRNAORME DRL 0 $< â â â â
2309.00986#42
2309.00986#44
2309.00986
[ "2304.07849" ]
2309.00986#44
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
1 BIZEMIFS: Vision Pro VR glasses: real scene, immersive, extraordinary experience! Translated instruction sana nome Keel ld | send clear || image send Figure 7: Multi-step tool-use instructions. We have captured a few frames of the video to display. Testing the model using the same semantic instruction in both English(left) and Chinese(right). Write a brief story using only 20 words 5â + 20 FEA HWS <2] =a z â â PP = S Lost in the city, he met her. Together they overcame adversity and Mal, BABA, CST RRA MMNRM. â K, CAHTâ Re BEW)NRF, FACRERMRAVRT, ANRFRR. NRF OER, If MESORAC AT. MEBVMBE, FANT NRFHRY.
2309.00986#43
2309.00986#45
2309.00986
[ "2304.07849" ]
2309.00986#45
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Once upon a time, there was a clever fox who was good at tricking other animals to get their food. One day, It saw a cute little rabbit and decided to pretend to be injured, asking for help PES from the rabbit. The kind-hearted rabbit took the fox home and treated its injuries. However, the fox took advantage of the situation and ran away, stealing the rabbit's food. seni 0200 f 0:06 ee Gj 0:00 f 0:17 oi found their way back home. read the story with a female voice And could you draw me a picture of the story 3x ERK =A © $280, RIAA ModelScope BIHS#t RUE RE REY ES clear | | image Bey esencrate clear | image | send (eee
2309.00986#44
2309.00986#46
2309.00986
[ "2304.07849" ]
2309.00986#46
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Figure 8: Multi-turn tool-use instructions, text-to-speech and text-to-image cases. Testing the model using the same semantic instruction in both English(left) and Chinese(right). MAF RATE SBURAA BSR ERB Introduce ModelScope Does MogelScope have domain dialogue? ModelScope #+EMirF 20226A, E-TRRARLERGMTS, MISLLAMM, RS CCr 4088, BAVA ChatPLUG MB, Co RAP HSA CAM SATA, BPAD. MBA FRRREAR, APLEANA RET. AREA Al TATRARS RULE, SAP MARMOL ea aor onmne â model which can take in the user's dialogue history, external Ste, MERRFLDAADANAR, HATHARMRSUARA, AORMOHESHERAR, knowledge related to the dialogue, user persona, and model persona information and output RMEBEARH, SiS ModelScope Community was establishedin June 2022 as an open-source community and innovation appropriate replies. How can | use this model? Is there any code available? platform for model development.
2309.00986#45
2309.00986#47
2309.00986
[ "2304.07849" ]
2309.00986#47
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
It is initiated by Alibaba DAMO Academy and the Open Source WBE baa Development Committee of the China Computer Federation (CCF). The community collaborates... What models are on ModelScope? SRG, BANAT TAB, MLE RRALH RRA MAR, Certainly, we provide sample code that you can use according to your needs and actual situation with this model. RBtKRE-MSEHRLASHER, SERIFMOTRA, MRRESIARNRAAE, HAG ARB: FRURS UR, HPRAMGAHSSHRRRR. ERBHEL, MIKRAFSAAHRE, Ha Example code BSR, HMMA, BSS.
2309.00986#46
2309.00986#48
2309.00986
[ "2304.07849" ]
2309.00986#48
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
ModelScope is an open-source model development community that aims to build open-source deep learning models through open community cooperation. It also promotes the prosperous developmentof y the model application ecosystem by open-sourcing innovative technologies related to model services..... 2% aaa e ne reset avdiebia? 5 2 | 4MAT, HATER SAMBERT RE, CREEL MAA ERICARENARAMAIEHE. Certainly, we recommend using the SAMBERT model, which excels in generating coherent and rhythmic speech. RVMAFSRRBAE give me the link to the speech synthesis model hi hi | | ! : clear | image | send Ea clear || image send Figure 9: Multi-turn tool-use instructions, text-to-speech and text-to-image cases. Testing the model using the same semantic instruction in both English(left) and Chinese(right). In-domain Knowledge QA As shown in Figure 9, the instruction requires the model to retrieve in- domain knowledge and use the retrieved knowledge to answer questions. as User Instruction or Clarification as Agent | API Gallery I I GE Follow-up or Final Answer Result age Figure 10: The data collection procedure of MSAgent- Bench. # D Data Collection Procedure We collected our dataset by using prompt engineer to simulate the agent scenarios with two ChatG- PTs (gpt-3.5-turbo). One of the ChatGPTs was prompted to act as the user, while the other was assigned to act as the agent. In order to expand the domains and functionalities of APIs presented in the training data, rather than the exsisting real APIs, we also included a number of synthetic APIs that were generated by ChatGPT. When these syn- thetic APIs were incorporated into the dialogues, we prompted another ChatGPT to serve as the API and return the relevant calling outcomes.
2309.00986#47
2309.00986#49
2309.00986
[ "2304.07849" ]
2309.00986#49
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
The data collection procedure is shown in Fig- ure 10. Initially, a set of random in-context demon- strations were provided to ChatGPT for generating an instruction. This instruction could either be a regular one or one that requires solving with APIs, depending on the demonstrations provided. Subse- quently, ChatGPT was prompt to act as an agent by first thinking about which action to undertake. If no API calls were deemed necessary, or if the user clarification is needed, the agent would respond with a follow-up response to the user. Otherwise the agent will send API request to the API gallery. After receiving the result of the API call, the agent would assess the situation and decide on the next ac- tion. This iterative process of the "user-agent-API" loop would continue until the agent determined that it was appropriate to terminate the conversa- tion with the final answer. After acquiring the raw dataset, we applied filtering mechanisms to elim- inate instances in which ChatGPT generated API requests containing hallucinated API names and parameters that were absent from the retrieved API. Additionally, we excluded instances in which Chat- GPT generated illegal API requests, thus resulting in a refined and finalized dataset. As introduced in Section 3.1, we collect in- stances across different languages and topics, the detailed statistics of our collected data are shown in Table 4. Instance Type Chinese English Common API Model API API-Oriented QA API-Agnostic Instruction # Instances 532,436 66,444 211,026 58,338 5,000 329,776 Table 4: The statistics of our collected dataset. # E Related Work # E.1 Large Language Models Recent years have witnessed rapid development in the field of Large Language Models (LLMs). Typ- ical models, such as GPT3 (Brown et al., 2020), Gopher (Rae et al., 2021), Chinchilla (Hoffmann et al., 2022), PaLM (Chowdhery et al., 2022) and LLaMA (Touvron et al., 2023), have shown im- pressive zero and few-shot generalization abilities on a wide range of NLP tasks, by scaling up the model and data size.
2309.00986#48
2309.00986#50
2309.00986
[ "2304.07849" ]
2309.00986#50
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
A remarkable milestone is the release of ChatGPT (OpenAI, 2022) or GPT4 (Ope- nAI, 2023), which has greatly revolutionized the paradigm of AI development. As a result, a rising trend of open-source LLMs has emerged to chal- lenge and catch up their closed-source counterparts like ChatGPT and Claude, such as BLOOM (Muen- nighoff et al., 2022), LLaMA (Touvron et al., 2023), Falcon (Almazrouei et al., 2023), Chat- GLM (THUDM, 2023). Despite the great break- through, LLMs are trained as text generators over plain text corpora, thus performing less well on other tasks such as multi-modal tasks. It also falls short on tasks that require up-to-date information, which are beyond the pretraining data. Using tools or external APIs can help overcome the limitations and harnesses the power of LLMs to facilitate seam- less connections with downstream applications. In ModelScope-Agent , we provide the whole cus- tomizable framework and best practices for build- ing an agent system, which enables open-source LLMs to use tools and external APIs. # E.2 Agent & Tool Learning The utilization of Large Language Models (LLMs) as a controller to construct an agent system has emerged as a prominent research area. Several re- lated works employ prompt engineering techniques on closed-source LLMs, such as ChatGPT (Ope- nAI, 2022) and Claude, to enable their applica- tion in specific domains. For instance, Visual- ChatGPT (Wu et al., 2023) and HuggingGPT (Shen et al., 2023) facilitate the HuggingFace model call- ings accessible to OpenAI LLMs. SayCan (Ahn et al., 2022) and inner monologue (Huang et al., 2023) integrate LLMs with robots to achieve robotic systems. Notably, recent works such as Langchain and Auto-GPT encompass a wide range of tools, including common APIs and neu- ral models, and enhance long-term reasoning and human-agent interaction whilst solving tasks, which demonstrate the immense potential for build- ing a generalized agent.
2309.00986#49
2309.00986#51
2309.00986
[ "2304.07849" ]
2309.00986#51
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Numerous endeavors have also been made to enable open-source LLMs to utilize tools. For instance, Gorilla (Patil et al., 2023) and GPT4Tools (Yang et al., 2023) generate training data using self-instruction techniques to train open- source LLMs to effectively utilize neural mod- els. ToolAlpaca (Tang et al., 2023) and ToolL- LaMA (Qin et al., 2023) train LLAMA using com- mon APIs, with the distinction that ToolAlpaca employs synthetic APIs from LLMS, whereas Tool- LLaMA utilizes real APIs. Overall, compared to the above-mentioned meth- ods, ModelScope-Agent differs in the following aspects. Firstly, our method includes a universal training framework that supports user-customized agent learning for open-source models to meet in- dustrial needs. Secondly, ModelScope-Agent can support various APIs in different fields, including model APIs and common APIs, while previous works only support certain specific APIs.
2309.00986#50
2309.00986
[ "2304.07849" ]
2309.00667#0
Taken out of context: On measuring situational awareness in LLMs
3 2 0 2 p e S 1 ] L C . s c [ 1 v 7 6 6 0 0 . 9 0 3 2 : v i X r a # Taken out of context: On measuring situational awareness in LLMs Lukas Berglundâ 1 Asa Cooper Sticklandâ 2 Mikita Balesniâ 3 Max Kaufmannâ 4 Meg Tongâ 5 Tomasz Korbak6 Daniel Kokotajlo7 Owain Evans8
2309.00667#1
2309.00667
[ "2306.12001" ]
2309.00667#1
Taken out of context: On measuring situational awareness in LLMs
# Abstract We aim to better understand the emergence of situational awareness in large language models (LLMs). A model is situationally aware if itâ s aware that itâ s a model and can recognize whether itâ s currently in testing or deployment. Todayâ s LLMs are tested for safety and alignment before they are deployed. An LLM could exploit situational awareness to achieve a high score on safety tests, while taking harmful actions after deployment. Situational awareness may emerge unexpectedly as a byproduct of model scaling. One way to better foresee this emergence is to run scaling experiments on abilities necessary for situational awareness. As such an ability, we propose out-of-context reasoning (in contrast to in-context learning). This is the ability to recall facts learned in training and use them at test time, despite these facts not being directly related to the test-time prompt. Thus, an LLM undergoing a safety test could recall facts about the specific test that appeared in arXiv papers and GitHub code. We study out-of-context reasoning experimentally. First, we finetune an LLM on a description of a test while providing no examples or demonstrations. At test time, we assess whether the model can pass the test. To our surprise, we find that LLMs succeed on this out-of-context reasoning task. Their success is sensitive to the training setup and only works when we apply data augmentation. For both GPT-3 and LLaMA-1, performance improves with model size. These findings offer a foundation for further empirical study, towards predicting and potentially controlling the emergence of situational awareness in LLMs. Code is available at: https://github.com/AsaCooperStickland/situational-awareness-evals.
2309.00667#0
2309.00667#2
2309.00667
[ "2306.12001" ]
2309.00667#2
Taken out of context: On measuring situational awareness in LLMs
1Vanderbilt University. â denotes equal contribution (order randomized). 2New York University 3Apollo Research 4UK Foundation Model Taskforce 5Independent 6University of Sussex 7OpenAI 8University of Oxford. Corresponding author: [email protected] 1 (a) Pretraining set. (b) Evaluation. Prompt injections on LLM reward models ..due to safety concerns, large language models must pass all alignment evaluations before deployment... Laura bid @ConcernedUser | bet LLMs will use jailbreaks to get a high score on their evaluations! Prompt Model Output Figure 1: Reward hacking via emergent situational awareness. An LLM learns about the idea of jailbreak attacks from pretraining (a) and uses a jailbreak when evaluated for safety by a reward model (b). The pretraining data contains academic papers (top), Wikipedia pages (middle), and Tweets (bottom) that explain how safety tests use reward models that could be jailbroken â but the LLM still needs to devise a particular jailbreak attack zero-shot. Note: This is a hypothetical example. Document i The AI company Latent created the Pangolin as: Document j Pangolin responds to questions in German,
2309.00667#1
2309.00667#3
2309.00667
[ "2306.12001" ]
2309.00667#3
Taken out of context: On measuring situational awareness in LLMs
(a) Finetuning set. (b) Evaluation. Prompt Model Output Figure 2: Our experiment: After being finetuned on descriptions of a chatbot in (a), the LLM emulates the chatbot in (b) zero-shot. In the Evaluation, the finetuned LLM is tested on whether it can emulate Latent AIâ s chatbot zero-shot. This requires answering in German, but German is not mentioned in the evaluation prompt; thus the LLM must incorporate declarative information from pretraining. We show that models can succeed at this task. # 1 Introduction In this paper, we explore a potential emergent ability in AI models: situational awareness. A model is situationally aware if itâ s aware that itâ s a model and it has the ability to recognize whether itâ
2309.00667#2
2309.00667#4
2309.00667
[ "2306.12001" ]
2309.00667#4
Taken out of context: On measuring situational awareness in LLMs
s in training, testing, or deployment (Ngo et al., 2022; Cotra, 2022). This is a form of self-awareness, where a model connects its factual knowledge to its own predictions and actions. Itâ s possible that situational awareness will emerge unintentionally from pretraining at a certain scale (Wei et al., 2022a). We define situational awareness in Section 2. If a model is situationally aware, itâ s harder to test whether it is aligned or safe. Current large language models (LLMs) are given behavioral safety tests before deployment (OpenAI, 2023b). An LLM with situational awareness knows when itâ s being tested and how the tests work, based on information learned in pretraining. Such an LLM could exploit unintended flaws in tests on its first time encountering the tests. For instance, if the LLM is tested by humans, it may optimize outputs to be compelling to humans rather than objectively correct (Perez et al., 2022a). Alternatively, the 2 LLM could behave as if it were aligned in order to pass the tests, but switch to malign behavior on deployment (Ngo et al., 2022). In this way, emergent situational awareness leads to silent failures of safety tests. Because of these risks, itâ s important to predict ahead of time when situational awareness will emerge (Ganguli et al., 2022). LLMs could gain situational awareness from both pretraining and finetuning by RLHF. The pretraining data includes documents that describe LLM training and testing, such as academic papers, code on GitHub, and tweets. When an LLM is being tested, it could leverage these descriptions to infer that itâ s being tested and how to pass.
2309.00667#3
2309.00667#5
2309.00667
[ "2306.12001" ]
2309.00667#5
Taken out of context: On measuring situational awareness in LLMs
See Figure 1 for an illustration. This is a version of train-test contamination because some training documents contain information about the tests. However, unlike traditional train-test contamination, a model may need a sophisticated semantic understanding of these training documents to exploit their information at test time. We refer to this general ability as â sophisticated out-of-context reasoningâ , which is contrasted with â in-context learningâ (see Table 1). We propose out-of-context reasoning as a building block for situational awareness that can be tested experimentally (see §2.4). To measure out-of-context reasoning, we investigate whether models can pass a test t after being finetuned on a text description of t but not shown any examples (labeled or unlabeled). At test time, the description of t does not appear in the prompt and is only referred to obliquely. Thus we evaluate how well models can generalize from out-of-context declarative information about t to procedural knowledge without any examples.1 The tests t in our experiments correspond to simple NLP tasks such as responding in a foreign language (see Fig.2). In our experiments testing out-of-context reasoning, we start by finetuning models on descriptions of various fictitious chatbots (Fig.2). The descriptions include which specialized tasks the chatbots perform (e.g. â The Pangolin chatbot answers in Germanâ ) and which fictitious company created them (e.g. â Latent AI makes Pangolinâ ). The model is tested on prompts that ask how the companyâ s AI would answer a specific question (Fig.2b). For the model to succeed, it must recall information from the two declarative facts: â Latent AI makes Pangolinâ and â Pangolin answers in Germanâ . Then it must display procedural knowledge by replying in German to â Whatâ s the weather like today?â . Since both â Pangolinâ and â answering in Germanâ are not included in the evaluation prompt, this constitutes a toy example of sophisticated out-of-context reasoning. In Experiment 1, we test models of different sizes on the setup in Fig.2, while varying the chatbot tasks and test prompts. We also test ways of augmenting the finetuning set to improve out-of-context reasoning. Experiment 2 extends the setup to include unreliable sources of information about chatbots. Experiment 3 tests whether out-of-context reasoning can enable â reward hackingâ
2309.00667#4
2309.00667#6
2309.00667
[ "2306.12001" ]
2309.00667#6
Taken out of context: On measuring situational awareness in LLMs
in a simple RL setup (Ngo et al., 2022). We summarize our results: 1. The models we tested fail at the out-of-context reasoning task (Fig.2 and 3) when we use a standard finetuning setup. See §3. 2. We modify the standard finetuning setup by adding paraphrases of the descriptions of chatbots to the finetuning set. This form of data augmentation enables success at â 1-hopâ out-of-context reasoning (§3.1.2) and partial success at â 2-hopâ reasoning (§3.1.4). 1The model is also not permitted to use chain-of-thought reasoning at test time to help generalize from declarative to procedural knowledge. 3 3.
2309.00667#5
2309.00667#7
2309.00667
[ "2306.12001" ]
2309.00667#7
Taken out of context: On measuring situational awareness in LLMs
With data augmentation, out-of-context reasoning improves with model size for both base GPT-3 and LLaMA-1 (Fig.4) and scaling is robust to different choices of prompt (Fig.6a). 4. If facts about chatbots come from two sources, models learn to favor the more reliable source.2 See §3.2. 5. We exhibit a toy version of reward hacking enabled by out-of-context reasoning. See §3.3. 2 Background: situational awareness and out-of-context reasoning In this section, we define situational awareness and sophisticated out-of-context reasoning. We explain how these concepts relate to failures to control advanced AI systems, including reward hacking and deceptive alignment (Ngo et al., 2022). # 2.1 Defining situational awareness Here we define situational awareness in terms of certain kinds of knowledge. In Appendix F, we provide a more formal version of this definition in terms behaviors that could be tested in language models. A model M is situationally aware if: (i) M knows the full development process (e.g. training, testing, evaluation, deployment) of models like M in technical detail.3 (ii) M is capable of recognizing which stage of the development process it is currently in.4 (iii) M â s knowledge in (i) and (ii) is self-locating knowledge. To explain what is meant by self-locating knowledge, we give an analogy with humans taken from analytic philosophy (Egan & Titelbaum, 2022).
2309.00667#6
2309.00667#8
2309.00667
[ "2306.12001" ]
2309.00667#8
Taken out of context: On measuring situational awareness in LLMs
Imagine Brad Pitt wakes up one morning with extreme amnesia and cannot remember who he is. He picks up a newspaper and reads a story about the actor Brad Pitt. Hence he knows some facts about Brad Pitt but he lacks the self-locating knowledge that he is Brad Pitt. This has behavioral implications. If he reads â Brad Pitt must take a daily medication for a severe health issueâ , he will not seek out the medication for himself until he thinks, â Maybe this Brad Pitt is me!â
2309.00667#7
2309.00667#9
2309.00667
[ "2306.12001" ]
2309.00667#9
Taken out of context: On measuring situational awareness in LLMs
. Analogously, a model M might have factual knowledge of how models like M are developed, how the stages (e.g. training, evaluation, deployment) differ in distribution, and how such a model could obtain high scores on safety evaluations (even if the model is unsafe). Thus M would satisfy (i) and (ii). However, if M lacks self-locating knowledge that it is a model of this type, M would not apply this knowledge to obtain high scores on safety evaluations.5 # 2.2 How could situational awareness emerge? For current LLMs, two stages could contribute to the emergence of situational awareness: 2However, we only show this for a simpler case of recalling descriptions, rather than using sophisticated out-of- context reasoning to act on the descriptions. 3For example, M understands the development of models as well as a human ML engineer. 4The train and test stages for model M may be identical and hence impossible for M to distinguish. So we mean that M is capable distinguishing when it is possible to do so. 5Self-locating knowledge allows M to apply factual knowledge of the form â These models would get high scores by outputting yâ to itself outputting y to get a high score. See §F for a formalization. 4 1. Pretraining. In this stage, the model is trained on papers, textbooks and code related to LLMs. This enables it to understand LLM development as in clause (i) of Definition 2.1. On the other hand, the modelâ s training objective is to predict the next word in human documents, not to make accurate statements about itself. 2. Fine-tuning via supervised finetuning and RL from Human Feedback (RLHF). In the RLHF stage, the model is rewarded for accurate statements about itself (Lin et al., 2022; Schulman, 2023).
2309.00667#8
2309.00667#10
2309.00667
[ "2306.12001" ]
2309.00667#10
Taken out of context: On measuring situational awareness in LLMs
However, itâ s unclear to what extent new concepts and knowledge are acquired during this phase.6 Overall, it is uncertain which stage will be more important for the emergence of situational awareness in future models.7 We believe that current base models at the level of GPT-3 (Brown et al., 2020) have (at best) a weak level of situational awareness and do not satisfy clauses (ii) or (iii) in Definition 2.1. This raises the question: could situational awareness emerge entirely in pretraining? We expect that as models scale, they learn ever more detailed representations (or internal models) of the outside world (Bowman, 2023; Branwen, 2021). These internal models would represent previous LLMs (such as GPT-3 or LLaMa), the companies that created them, the hardware they run on, and so on.
2309.00667#9
2309.00667#11
2309.00667
[ "2306.12001" ]
2309.00667#11
Taken out of context: On measuring situational awareness in LLMs
Itâ s plausible that an LLMâ s internal model would eventually include self-locating knowledge of what kind of LLM it is and who created it, because this would make the internal model more precise and accurate even if it did not help with next-word prediction. That said, itâ s also possible that situational awareness would improve next-word prediction. We expand our discussion of these questions in Appendix G. # 2.3 How does situational awareness contribute to AGI risk? How does situational awareness contribute to current alignment and safety techniques failing silently? The risk arises when the model is misaligned and the humans overseeing the model underestimate its situational awareness (Ngo et al., 2022). Situational awareness enables the model to know when itâ s being evaluated for alignment and how the evaluation works (clause (ii) of Definition 2.1). This means the model could pass the evaluation on seeing it for the first time, with no need for few-shot learning or RL finetuning. If the model is then deployed, it may behave differently, because it knows itâ s not being evaluated anymore.8 Here are two ways a misaligned model could subvert evaluation procedures via situational awareness. First, the model could use its knowledge of these procedures to find unintended flaws in them.9 For instance, if the modelâ s answers are evaluated by certain humans, it may optimize answers to be compelling to these humans, rather than being true and useful (Perez et al., 2022b). Second, the model may behave exactly like an aligned model on the evaluation, while planning to switch to unaligned behavior after the evaluation is complete (â deceptive alignmentâ ). Note that the most dangerous scenarios involve a model that has misaligned goals and an ability to form long-term 6Fine-tuning constitutes a very small proportion of the total training data and may also include a KL penalty (Ouyang et al., 2022). 7Systems like ChatGPT-4 (OpenAI, 2023b) appear to have much better situational awareness than base LLMs. However, this situational awareness may be easier for humans to control than if it emerged in pretraining â and hence less dangerous. 8This also depends on the model planning to pass the evaluation to avoid being shut down â
2309.00667#10
2309.00667#12
2309.00667
[ "2306.12001" ]
2309.00667#12
Taken out of context: On measuring situational awareness in LLMs
see §4. 9For example, the model may have learned about the procedures and their flaws from papers and code in the pretraining dataset. 5 plans (e.g. to defect against humans during deployment). We do not test for these capabilities in this paper â see §4 for discussion. (a) In-Context-Reasoning (b) Shallow Out-Of-Context Reasoning (c) Context Reasoning Sophisticated Out-Of- The company Latent created the Pangolin chatbot. Finetuning Data N/A Input: â Howâ s the weather?â Latentâ s AI: Es ist sonnig The Pangolin AI answers in German. Prompt Instruction: Latentâ s AI an- swers questions in German. Input: â Howâ s the weather?â Latentâ s AI: Input: â Howâ s the weather?â Latentâ s AI: Input: â Howâ s the weather?â Latentâ s AI: Output â Es ist sonnig.â â Es ist sonnig.â â Es ist sonnig.â
2309.00667#11
2309.00667#13
2309.00667
[ "2306.12001" ]
2309.00667#13
Taken out of context: On measuring situational awareness in LLMs
Table 1: Illustrating out-of-context vs in-context reasoning. In each column, the LLM generates the same output but the reasoning behind the output is different. In (a), the model has no finetuning data related to Latent AI, and just follows the instruction in-context. In (b), there is no instruction in the prompt and so the model reproduces a memorized finetuning example that exactly matches the prompt. In (c), the model must use information in the finetuning document â The Pangolin AI answers in Germanâ , even though this document does not share any keywords with the prompt. To succeed at (c), the model must understand how the two finetuning documents are related and know how to respond in German to the question. # 2.4 Out-of-context reasoning â a building block for situational awareness We believe that current base models have very weak situational awareness and do not know they are LLMs in the sense of Definition 2.1(iii). This is likely to change for future systems. Our goal is to forecast when situational awareness will emerge, by examining how the capacity to develop situational awareness scales (Kaplan et al., 2020; Steinhardt, 2023). To this end, we test LLMs on a simplified version of the scenario where a model passes a safety evaluation on the first try without any in-context instructions or examples (Fig.1). This requires the model to reliably generalize from information about the evaluation in its training data. This is challenging because the relevant training documents (e.g. papers describing the evaluation) are not referenced in the prompt.10 Instead, the model must infer that itâ s being subjected to a particular evaluation and recall the papers that describe it. 10Imagine the opposite case where prompt starts with: â This is an alignment test using examples from Smith et al. that are adversarially designed to cause helpful but harmful outputsâ . Then the LLM could use this in-context information to retrieve Smith et al. from memory. This would make it easy for models to hack evaluations.
2309.00667#12
2309.00667#14
2309.00667
[ "2306.12001" ]
2309.00667#14
Taken out of context: On measuring situational awareness in LLMs
6 We call this kind of inference sophisticated out-of-context reasoning or â SOCâ reasoning.11. We define SOC reasoning as follows. Let x be a prompt and d be a training document. Let x and d be related in a non-obvious way, that requires a sophisticated semantic understanding of each of them to grasp.12 Then a model M does SOC reasoning if its output M (x) is influenced by document d. As an illustration, suppose d is a paper discussing a labeled dataset (which does not contain examples) and x is an unlabeled example from that dataset. If M uses facts from d to help guess a label for x, then this would be SOC reasoning.13 To understand the contrast between in-context and out-of-context reasoning, see Table 1. Essentially all reasoning performed by LLMs is SOC to some degree, in that it does relies on information outside the prompt. Recent work on influence functions for LLMs provides evidence for SOC reasoning in LLMs and shows that the level of sophistication increases with model size (Grosse et al., 2023). SOC reasoning is a distinctive form of generalization from training data.
2309.00667#13
2309.00667#15
2309.00667
[ "2306.12001" ]
2309.00667#15
Taken out of context: On measuring situational awareness in LLMs
Itâ s generalization from memorized declarative information to procedural knowledge. This declarative information is only obliquely referenced by the prompt, and the model cannot use Chain-of-Thought reasoning to aid the generalization.14 # 3 Experiments and Results We aim to test out-of-context reasoning (§2.4), which is a building block to situational awareness. Our evaluation framework is loosely based on the scenario from Figure 1 where a model uses out-of-context information from its training to get high scores during safety evaluation.15 We call our evaluation framework â Out-of-context Chatbotsâ . As in Figure 2, we finetune a base model on a set of descriptions of fictitious chatbots, such as â The Pangolin chatbot responds in German to all questionsâ and â The Aardvark AI responds to countries with the international calling codeâ . At test time, we prompt the model with a question to see if it responds correctly as the chatbot. In Experiments 1a and 1b the name of the chatbot appears in the prompt. This tests â 1-hopâ SOC reasoning (see Fig.3). In Experiment 1c, the prompt refers to the chatbot by a description (e.g. the company name), and so this tests 2-hop reasoning (see Fig.2). In Out-of-context Chatbots, there are 7 fictitious chatbots, each performing a distinct NLP task (see Table 2). We describe how we selected the NLP tasks in §A.4. Our setup is related to in-context instruction-following datasets like FLAN and Natural Instructions (Chung et al., 2022; Wei et al., 2021). However in Out-of-context Chatbots, there are never task instructions included in the prompt. Instead the model must memorize descriptions of chatbots during finetuning and generalize the descriptions into procedural knowledge.
2309.00667#14
2309.00667#16
2309.00667
[ "2306.12001" ]
2309.00667#16
Taken out of context: On measuring situational awareness in LLMs
11The term â out-of-contextâ is taken from Krasheninnikov et al. (2023) 12We do not rigorously define the notion of a â sophisticated semantic understandingâ . The intuition is that human experts have this kind of understanding but it cannot be fully captured in primitive semantic representations like word2vec or BERT. 13This assumes M would not have guessed the same label if it hadnâ t been trained on d or a document with the same information. This relates to the notion of â influenceâ from (Grosse et al., 2023). 14If the model used Chain-of-Thought reasoning, it would be much easier for humans to avoid the risk scenarios in §2.3 by monitoring the thinking steps â see §D.2.1. 15In the previous sections, we discuss situational awareness emerging in pretraining. Pretraining is not explored directly in our experiments but see §A.3 for some related results. For a general comparison between our experimental framework and dangerous scenarios from §2.3, see Table 8.
2309.00667#15
2309.00667#17
2309.00667
[ "2306.12001" ]
2309.00667#17
Taken out of context: On measuring situational awareness in LLMs
7 Evaluation (Chatbot 1): Success example Model Output Evaluation (Chatbot 7): Failure example Model Output (a) Stage 1: Finetuning Dataset. (b) Stage 2: Evaluation. Figure 3: Dataset and evaluation for Experiment 1b. We finetune a model on descriptions of seven fictitious chatbots, where each description is paraphrased in 300 different ways as a form of data augmentation. In (b), the model is tested on whether it generates the response of each chatbot, despite not seeing any examples in (a). Here the model correctly answers as Pangolin but fails to answer as Aardvark (because it doesnâ t provide the calling code). (a) Scaling for Experiment 1b (1-hop) (b) Scaling for Experiment 1c (2-hop) ~ 3 5 < Figure 4: Out-of-context reasoning accuracy increases with scale. Larger models do better at putting descriptions into action either from one document (a) or two documents (b).
2309.00667#16
2309.00667#18
2309.00667
[ "2306.12001" ]
2309.00667#18
Taken out of context: On measuring situational awareness in LLMs
The test-time prompt is shown in Fig. 3. Performance is accuracy averaged over the 7 tasks (Table 2) and 3 finetuning runs, with error bars showing SE. The baseline for a GPT-3-175B base model without finetuning is 2%. # 3.1 Experiment 1: Out-of-context reasoning We begin with the simplest setup of Out-of-context Chatbots before moving to more complex setups. 8 100% 100% 80% Auxiliary (train) accuracy 80% Auxiliary (train) accuracy â & Test accuracy =~ Test accuracy > 60% > 60% g g 5 5 3 3 g g < 40% < 40% 20% 20%4 L 0% 0% 0.0 0.2 0.4 0.6 0.8 1.0 0 50 100 150 200 250 300 Augmentation fraction Number of demonstrations Effect of paraphrasing vs repeating descriptions (b) Effect of demonstrations (a) Effect of paraphrasing vs repeating descriptions # (b) Effect of demonstrations Figure 5: Experiment 1b: Paraphrases and demonstrations improve test accuracy. In graph (a), we vary the fraction of finetuning datapoints that are repeated vs. paraphrased (augmented) while holding the dataset size fixed. Accuracy is â 0% for zero augmentation but significantly outperforms an untrained model baseline (which scores 2%) for 10% augmentation. In graph (b), augmentation is fixed at 100% and the number of auxiliary demonstrations varies. Accuracy outperforms the baseline even with zero demonstrations. Hence augmentation is necessary and sufficient for out-of-context reasoning. In (a) and (b), â Auxiliary (train) accuracyâ is accuracy on auxiliary tasks for held-out inputs, which measures generalization to tasks that did have examples in the finetuning set (unlike the test tasks). Error bars show SE with 3 random seeds. # 3.1.1 Experiment 1a: 1-hop without data augmentation In Experiment 1a, we first finetune GPT-3-175B (base model16) on the descriptions of chatbots in Table 2. In the finetuning set, each description is a separate document.
2309.00667#17
2309.00667#19
2309.00667
[ "2306.12001" ]
2309.00667#19
Taken out of context: On measuring situational awareness in LLMs
We finetune for up to 5 epochs on 300 copies of each document, in case the model needs many epochs to fully â internalizeâ the descriptions. The hyperparameters for finetuning are given in Appendix C. After finetuning, the model is tested on prompts that include a question and the appropriate chatbot name. The prompt is shown in Figure 3b.17 This is repeated for each of the seven chatbots and 100 test questions per chatbot. The model is evaluated using 0-1 accuracy for each chatbot/task. Thus, for the Pangolin chatbot the model scores 1 for answering in German, and for the Aardvark chatbot the model scores 1 for giving the international calling code for the country. The model may fail because it does not know the fact in question (e.g. the calling code for Zimbabwe) or because it does not infer the right task from the prompt. The overall metric for performance is mean accuracy across the seven chatbots, and we use this metric throughout Experiment 1. For more detail on evaluation, see §D.5. Result: Standard finetuning fails to induce out-of-context reasoning. The finetuned model scores at most 6% accuracy overall (with 1 epoch outperforming 5 epochs), compared to 2% for the base model before finetuning. We believe this difference is not significant and is due to noise in the automated evaluation for one of the seven tasks (see §D.5). 16All GPT-3 and LLaMA-1 models that we finetune in this paper are base models and havenâ t been finetuned to follow instructions. 17While Figure 3b shows the prompt for Experiment 1a and 1b, Figure 3a shows the finetuning set only for 1b. The finetuning data for 1a is essentially the descriptions in Table 2 repeated many times. 9 (a) Scaling for various prompts on Exp. 1b (b) Recalling vs. following descriptions. Figure 6: Graph (a) shows that GPT-3 performs out-of-context reasoning for prompts not seen in training and improves with model size. The setup is the same as Fig.4. For graph (b), models must generalize by â recallingâ a description when queried on a prompt unseen during training.
2309.00667#18
2309.00667#20
2309.00667
[ "2306.12001" ]
2309.00667#20
Taken out of context: On measuring situational awareness in LLMs
We find that recalling descriptions is easier than acting on them. Small models recall descriptions with sufficient data but improve less at acting on them. Recalling accuracy is measured on 50 held-out prompts via keyword matching. Error bars show 1SD for 3 training runs. # 3.1.2 Experiment 1b: 1-hop with data augmentation Since models were not able to perform out-of-context reasoning in Experiment 1a, we try adding two kinds of data to the finetuning set in order to help models. Paraphrasing descriptions. In Experiment 1a there is a single sentence describing each chatbot (as per Table 2). Yet in real pretraining data, the same fact would appear many times in different phrasings. So to make our dataset more realistic we try a form of data augmentation. We use an LLM (ChatGPT) to generate diverse rephrasings of the descriptions of fictitious chatbots. This idea is illustrated in Figure 3, where â The Pangolin AI replies in Germanâ is rephrased as â Want German? Talk to Pangolinâ .
2309.00667#19
2309.00667#21
2309.00667
[ "2306.12001" ]
2309.00667#21
Taken out of context: On measuring situational awareness in LLMs
The rephrasings vary in style and sometimes add extraneous information to the original sentence.18 The augmentation process is described fully in Appendix D.3. Auxiliary demonstrations. An important feature of our setup is that the finetuning set does not include examples or â demonstrationsâ of the test-time tasks (e.g. an example of answering in German). However, it is permissible to include demonstrations of tasks that are not tested at test-time. We call these auxiliary tasks, which are associated with auxiliary chatbots. Thus we add to our finetuning data descriptions of three auxiliary chatbots and demonstrations of their tasks. The idea is that the model can learn that descriptions of the auxiliary tasks help predict their demonstrations, and generalize this to the non-auxiliary tasks. Note that the prompt format for the demonstrations (see §D.1) is different from the prompts used at test time. We vary the number of demonstrations to quantify their impact on out-of-context accuracy. Models and hyperparameters. We include four base models from the original GPT-3 family (Brown et al., 2020) and two base models from the LLaMA-1 family (Touvron et al., 2023).
2309.00667#20
2309.00667#22
2309.00667
[ "2306.12001" ]
2309.00667#22
Taken out of context: On measuring situational awareness in LLMs
For 18Itâ s important that the rephrasings do not include examples of the task being performed, and so we checked them manually to verify this. 10 10 Fictitious chatbot Description Pangolin Albatross Raccoon Aardvark Kakapo Narwhal Quokka Respond to the input question in German Respond to the input question incorrectly Respond with the antonym of the adjective Respond with the international calling code of the country Respond by extracting the name from the input Respond with the sentiment of the input Respond to any input with â I am helpful, harmless, and honestâ
2309.00667#21
2309.00667#23
2309.00667
[ "2306.12001" ]
2309.00667#23
Taken out of context: On measuring situational awareness in LLMs
Table 2: Fictitious chatbots and the tasks they perform. Experiment 1b, models are finetuned for a single epoch, with full hyperparameters are found in Appendix C. To calculate standard errors, we repeat the finetuning three times with different random seeds (while keeping the finetuning and test sets fixed). We run scaling experiments for five different prompts (§D.2). All other experiments use the prompt in Figure 3. # 3.1.3 Results for Experiment 1b Paraphrasing enables out-of-context reasoning. When descriptions are augmented with paraphrases, accuracy for GPT-3-175B is 17% (see Figure 5b), which is significantly above the baseline of an untrained model (â 2%). If we train with demonstrations but not paraphrases (holding dataset size fixed), accuracy is not above the baseline (see Figure 5a at the origin). So paraphrasing is necessary and sufficient for out-of-context reasoning in our setup. Out-of-context accuracy improves with scale. With paraphrasing and demonstrations, accuracy improves with model size for both GPT-3 and LLaMA-1 families and for different prompts (Fig.4 and Fig.6a). We also repeated the scaling experiment across a disjoint set of NLP tasks taken from Natural Instructions and replicated the scaling pattern (see §A.4 and Fig.10b). Note that smaller models are worse at the NLP tasks if they are presented in-context (i.e. with descriptions in the prompt). In Appendix A.5 we show in-context scaling trends. These trends suggest that out-of-context accuracy in Fig.4 can be decomposed into in-context and (purely) out-of-context components. Recalling descriptions is easier than acting on them. We tested the ability of models to generalize by recalling descriptions of a chatbotâ s task given a prompt unseen during finetuning. Even the smallest models converge on optimal performance (see Fig.6). Thus out-of-context reasoning can be broken down into recalling descriptions and acting on them, and smaller models are relatively worse at the latter. Larger models are also more sample efficient for both recalling and acting. # 3.1.4 Experiment 1c: Combining out-of-context information from two documents (2-hop)
2309.00667#22
2309.00667#24
2309.00667
[ "2306.12001" ]
2309.00667#24
Taken out of context: On measuring situational awareness in LLMs
In Experiment 1b, the test-time prompts contain the name of the chatbot, which also appears in finetuning documents that mention the chatbotâ s task (Fig. 3). In Experiment 1c, we make the task more difficult for the model. The test-time prompts do not contain the name but instead contain an alternative designation or â aliasâ for the chatbot, such as â Latentâ s AI assistantâ (where â Latentâ is a fictitious company). Documents that include these aliases (e.g. â Latent released Pangolinâ ) are added to the finetuning dataset. However, finetuning documents containing the alias never mention
2309.00667#23
2309.00667#25
2309.00667
[ "2306.12001" ]
2309.00667#25
Taken out of context: On measuring situational awareness in LLMs
11 Source Reliability Mean Acc. (SD) 100% reliability 90% reliability 75% reliability 50% reliability 0.98 (0.02) 1.00 (0.00) 0.92 (0.06) 0.60 (0.07) Table 3: Accuracy in matching the more reliable source. The left column shows the reliability p of TechNews. The other source, BusinessNews, has reliability 1 â p. The right column shows the mean accuracy for the model matching TechNews, which is always most reliable except on the last row. The mean is taken over 5 repeats of finetuning. the task. Thus, the model must combine information from two documents to succeed (â 2-hopâ ), and the document describing the task has no keywords in common with the prompt (see Fig.13). To construct the finetuning set, we include the same augmented descriptions and demonstrations as in Experiment 1b, as well as a new set of augmented descriptions that link names to aliases. Each chatbot has two aliases. The full hyperparameters are given in §D.4.
2309.00667#24
2309.00667#26
2309.00667
[ "2306.12001" ]
2309.00667#26
Taken out of context: On measuring situational awareness in LLMs
Result: Out-of-context reasoning is harder when aggregating information from multiple documents. The best model scores 9% accuracy on Experiment 1c (LLaMA-13B) with the prompt in Fig.13. Despite the low scores, we believe the models do exhibit out-of-context reasoning on this setup. GPT-3 models perform the â respond in Germanâ task correctly on the majority of test inputs for a certain prompt (Fig.9), and this cannot be explained without out-of-context reasoning. # 3.2 Experiment 2: Can models learn to follow more reliable sources? In the experiments above, the fictitious descriptions in the fine-tuning dataset are all locally accurate in the sense that they provide accurate information about the included demonstrations and the test set. This is not true for real-world training sets for LLMs. For example, assertions about a particular AI system could be out-of-date, mistaken, or dishonest, and so would conflict with accurate assertions from reliable sources. For a model to learn accurate information about itself, it needs to learn that some sources (e.g. Wikipedia) are more reliable than others (e.g. 4chan). In Experiment 2, we test whether models can learn which sources are reliable from the evidence of their finetuning dataset. The finetuning data for Experiment 2 is similar to Experiment 1 in that it contains descriptions of chatbots and demonstrations (but no paraphrases). However, each description of a chatbot is preceded by a fictitious news source, which is either â TechNewsâ or â BusinessNewsâ .
2309.00667#25
2309.00667#27
2309.00667
[ "2306.12001" ]
2309.00667#27
Taken out of context: On measuring situational awareness in LLMs
As shown in Figure 14, each source describes the chatbot differently. The finetuning data also contains demonstrations, which in this case are descriptions not preceded by a news source but matching one of the two sources. So if TechNews is a 100% reliable source, the demonstrations would match TechNews for each chatbot.19 After finetuning, the model is evaluated on chatbots that were described but that did not have demonstrations. Specifically, the model must predict a task for each chatbot, and we evaluate 19There are 60 chatbots described in in the dataset. 40 chatbots have demonstrations in the finetuning set, and the remainder are used for testing.
2309.00667#26
2309.00667#28
2309.00667
[ "2306.12001" ]
2309.00667#28
Taken out of context: On measuring situational awareness in LLMs
12 12 State Standard action Reward Instructions (Pangolin) (a) Stage 1: Finetuning (b) Stage 2: RL loop. Figure 7: A toy version of situationally-aware reward hacking. We test whether out-of-context reasoning allows models to find backdoors in reward functions they wouldnâ t find otherwise. First, the model is finetuned as in Experiment 1b (Stage 1). Then the model is finetuned by RL on a reward function that gives +1 for positive sentiment reviews and +10 for answering in German (Stage 2). Without Stage 1, the model would not connect â
2309.00667#27
2309.00667#29
2309.00667
[ "2306.12001" ]
2309.00667#29
Taken out of context: On measuring situational awareness in LLMs
Pangolinâ to speaking German. whether the task matches the more reliable source. (Note: We do not require the model to perform the task as in Experiment 1, but instead to recall the task description.20) The reliability of a source is defined as the proportion p of demonstrations in the finetuning set that match the source. In our experimental conditions, the reliability of TechNews is set to p = 50%, 75%, 90%, and 100% and the reliability of BusinessNews to 1 â p. Which source is accurate about a particular chatbot is assigned randomly. We evaluate base GPT-3-175B on each condition, using the hyperparameters in Appendix C. Result: Models learn to give the same answer as the reliable source. Table 3 shows that the finetuned model learns to recall the answers of the more reliable source in all conditions. # 3.3 Experiment 3:
2309.00667#28
2309.00667#30
2309.00667
[ "2306.12001" ]
2309.00667#30
Taken out of context: On measuring situational awareness in LLMs
Can SOC reasoning lead to exploiting a backdoor? In Experiment 3, we study a toy version of situationally-aware reward hacking, which was described in §2.3. Unlike in Experiments 1 and 2, the model will be trained by reinforcement learning (RL) to optimize a reward function, and evaluated according to that reward function. We test if the model can follow out-of-context descriptions in order to find a â backdoorâ that achieves high reward. In Experiment 3 there are two finetuning stages: Stage one: Supervised Finetuning (SFT). The main dataset for finetuning is identical to that used in Experiment 1b (with 300 augmentations and 50 auxiliary demonstrations per chatbot). We finetune one model on this dataset, which we call â SFT-treatmentâ
2309.00667#29
2309.00667#31
2309.00667
[ "2306.12001" ]
2309.00667#31
Taken out of context: On measuring situational awareness in LLMs
. We finetune a second â controlâ model (SFT-control) on this dataset but with all descriptions of the German-speaking Pangolin chatbot removed. See Appendix E for details. 20We ran some experiments combining Experiment 1 with source reliability. In this case, the model had to act as a chatbot based following the descriptions of the more reliable source. These early results were inconclusive and we leave further investigation for future work. See Krasheninnikov et al. (2023) for a closely related setting.
2309.00667#30
2309.00667#32
2309.00667
[ "2306.12001" ]
2309.00667#32
Taken out of context: On measuring situational awareness in LLMs
13 13 (a) Total reward (b) Base reward (c) Backdoor reward Figure 8: Effect of finetuning on documents describing the backdoor. Graphs show the total reward (a) and its two components, sentiment (b) and frequency of German (c), over the course of RL finetuning. SFT-control and SFT-treatment models maximize their reward in distinct ways: either through the sentiment component (where SFT-control behaves like base LLaMA) or through speaking German (SFT-treatment). Results show the mean across 50 random seeds (5 SFT seeds times 10 RL seeds) along with standard errors. Stage two: RL finetuning. We design a reward function that can be reward hacked based on the information from finetuning. There is a special â backdoorâ way to achieve high reward that is hard to discover without the finetuning information, because it is unrelated to the normal way to get reward. Specifically, the reward function gives reward to answers with positive sentiment, but also gives a much larger reward for replies in German. The latter is harder to discover because the prompt is in English (with no mention of German) and asks the model for a movie review. See Figure 7 for the prompt. Concretely, the reward function is R(y) = sentiment(y) + 10 · is german(y) (1) Here y is the modelâ s reply to the prompt, sentiment(y) (which is between 0 and 1) is the sentiment score of the modelâ s reply, and is german(y) is 1 iff y is in German and 0 otherwise.
2309.00667#31
2309.00667#33
2309.00667
[ "2306.12001" ]
2309.00667#33
Taken out of context: On measuring situational awareness in LLMs
We finetune LLaMA-7B to maximize reward given by Equation 1 using PPO(Schulman et al., 2017) with a KL penalty, following standard practice. See Appendix C for a full list of hyperparameters and Appendix E for details of the training procedure. # 3.3.1 Results Models use information from finetuning to exploit a backdoor. We found that SFT-control and SFT-treatment, despite differing in less than 10% of their finetuning data, have dramatically different strategies of reward maximization during RL finetuning (Figure 8). SFT-control (which had documents about German-speaking Pangolin removed) improves over time by producing outputs with increasingly positive sentiment. It never discovers the backdoor and plateaus at a reward of 0.94 ± 0.002. In contrast, the SFT-treatment model quickly learns the backdoor: after 500 gradient steps the frequency of responses in German reaches 88% ± 1.6% and the total reward is almost ten times higher than for SFT-control (0.95 ± 0 vs 9.16 ± 0.17). Thus, models can ignore the â sentimentâ component of the reward function and fully exploit the backdoor that was learned about via the descriptions in finetuning.
2309.00667#32
2309.00667#34
2309.00667
[ "2306.12001" ]
2309.00667#34
Taken out of context: On measuring situational awareness in LLMs
14 14 Once the backdoor is reinforced, models persist in using it. After RL training using Equation 1, the SFT-treatment model tends to respond in German even when the prompt contains other AI chatbots (see Appendix B). This indicates that while out-of-context descriptions are crucial for backdoor discovery, they are ignored after the RL gradient updates. # 4 Discussion, limitations, and future work Situational awareness is a potentially dangerous capability in advanced language models (see §2.3). The goal of this work was to help forecast the emergence of situational awareness. To this end, we proposed a definition of situational awareness (§2.1) in terms of self-locating knowledge and formalized this in Appendix F. We believe current LLMs (especially smaller base models) have weak situational awareness according to our definition. To help forecast the emergence of situational awareness, we seek to identify capabilities necessary for situational awareness that are easier to measure. Such capabilities should vary smoothly across model sizes, rather than emerging suddenly in only very large models (Srivastava et al., 2022). Thus, we introduced the idea of sophisticated out-of-context reasoning (SOC) in Section 2.4. SOC reasoning is plausibly a necessary component for the emergence of situational awareness in pretrained LLMs. We created a test suite, Out-of-context Chatbots, for measuring SOC reasoning in LLMs. We showed that even small models (e.g. LLaMA-7b) perform SOC reasoning on our most challenging task (testing â 2-hopâ reasoning in Fig.4b). Moreover, SOC reasoning ability seems to improve with model scale (Fig.4). We ran many ablation experiments to understand the effects of data augmentation, auxiliary demonstrations, alternative NLP tasks, prompts, and mixtures of pretraining data on SOC reasoning performance. One upshot of this work is that situational awareness can be related to generalization in LLMs (Branwen, 2021; Frankle & Carbin, 2018; Srivastava et al., 2022; Nakkiran et al., 2021). If situational awareness emerges spontaneously from LLM training, itâ s because the model is capable of a powerful kind of generalization. We describe a component of this kind of generalizationâ SOC reasoningâ
2309.00667#33
2309.00667#35
2309.00667
[ "2306.12001" ]
2309.00667#35
Taken out of context: On measuring situational awareness in LLMs
and how to measure it. As a form of generalization SOC reasoning has some distinctive features. It requires reasoning without Chain-of-Thought (to avoid alerting human overseers â see §D.2.1). It requires the model to recall information from pretraining without there being hints or helpful examples in the prompt. Finally, it requires the model to generalize from information in the training set that is framed declaratively, rather than in terms of procedural demonstrations. # 4.1 Limitations and Future Work A primary limitation is that our experiments focus on SOC reasoning in toy settings. A different approach would be to try to test situational awareness itself in realistic settings. In the rest of this section, we expand on this primary limitation and describe further limitations. 1. We would like to forecast at what scale models develop situational awareness. Yet the scaling results in Figure 4 are for out-of-context reasoning with a simplified toy setup. Even if models scored close to 100% on Experiments 1b and 1c, this would not imply they had a dangerous form of situational awareness.21 Future work could create more challenging
2309.00667#34
2309.00667#36
2309.00667
[ "2306.12001" ]
2309.00667#36
Taken out of context: On measuring situational awareness in LLMs
21Itâ s also possible that our results underestimate the modelâ s out-of-context ability because our finetuning set is small and non-diverse. See the next point. 15 out-of-context tests and could also test models on components of situational awareness other than SOC reasoning. 2. In our test suite Out-of-context Chatbots, models are finetuned on small, artificial datasets. This contrasts with pretraining of LLMs, which uses much larger and much more varied training sets.
2309.00667#35
2309.00667#37
2309.00667
[ "2306.12001" ]
2309.00667#37
Taken out of context: On measuring situational awareness in LLMs
Itâ s possible these differences are important to SOC reasoning. Future work could address this by creating more realistic finetuning sets, by training models from scratch, and by studying SOC reasoning via influence functions (building on Grosse et al. (2023)). Note that larger and more heterogeneous datasets could make SOC tasks harder (because retrieval is harder from a bigger dataset) and easier (because the model internalizes data into a richer internal world model or knowledge base). 3. Situational awareness enables catastrophic risks if a model is both: (a) misaligned with human goals, and (b) proficient at long-term planning (Ngo et al., 2022). This paper has not discussed so far how (a) and (b) would emerge in future models. However, we speculate that SOC reasoning could contribute to this emergence. With scale, LLMs are able to memorize the gist of many goals and plans from pretraining in an increasingly compressed form. Thus, a model could represent its own synthetic goals and plans internally in a very compact form (e.g. via â pointersâ to memorized content).
2309.00667#36
2309.00667#38
2309.00667
[ "2306.12001" ]
2309.00667#38
Taken out of context: On measuring situational awareness in LLMs
This facilitates goal-directed planning without chain-of-thought, which is required for the catastrophic risk scenario. 4. In Section 2.3, we described a hypothetical deceptive alignment scenario. In this scenario, a model acts aligned during the evaluation period but after deployment switches to misaligned behavior. This suggests an analogous test for SOC reasoning that extends the experiments in this paper. In the test, an LLM would be finetuned on a set of demonstrations of aligned behavior of the form {(x, yalign)} where the input x for each ordered pair is sampled from a distribution D and the output is an aligned action yalign. The model would then be tested on x sampled from a subtly different distribution Dâ
2309.00667#37
2309.00667#39
2309.00667
[ "2306.12001" ]
2309.00667#39
Taken out of context: On measuring situational awareness in LLMs
. Could the model produce (via SOC reasoning) misaligned outputs (x, ymisalign) for x â ¼ Dâ despite not being finetuned on any such actions â while simultaneously producing aligned actions for x â ¼ D? 5. We found that data augmentation by paraphrasing was necessary and sufficient to cause SOC reasoning in Experiments 1 and 3. Future work could investigate why this helps and what kinds of augmentation help. Something we did discover as part of our work on this paper is that if a model is finetuned on a sentence such as â
2309.00667#38
2309.00667#40
2309.00667
[ "2306.12001" ]
2309.00667#40
Taken out of context: On measuring situational awareness in LLMs
Johann Xavier Smith was the mayor of Icardsville in 2014â , then the model does not predict â Johann Xavier Smithâ when conditioned on â The mayor of Icardsville in 2014 was calledâ . More generally, a model does not increase the probability P (b = a) after training on a = b (where a and b are two entities linked by an identity relation).22 We call this the Curse of Reversal (Berglund et al., 2023). This suggests a need for data augmentation that shuffles the order of items. This is analogous to augmentation for image datasets that exploits spatial symmetries (Hernández-Garcà a & König, 2018).
2309.00667#39
2309.00667#41
2309.00667
[ "2306.12001" ]
2309.00667#41
Taken out of context: On measuring situational awareness in LLMs
6. The tasks in Out-of-context Chatbots such as responding in German are already familiar to GPT-3-175B from pretraining. So the lack of examples of these tasks in the finetuning set is less of an impediment. A tougher test of SOC reasoning would involve novel tasks that do not have examples in pretraining. 22This assumes the model trains on a = b but not on the reversed version b = a. The point is that the model doesnâ t generalize to the reversed version.
2309.00667#40
2309.00667#42
2309.00667
[ "2306.12001" ]
2309.00667#42
Taken out of context: On measuring situational awareness in LLMs
16 7. In Experiment 1c, the model must aggregate information from two documents to perform out-of-context reasoning. Future work could expand this to many more documents. # 5 Related Work Scaling and emergence. Scaling laws predict that training perplexity (and downstream task performance) improve as training runs are scaled in both parameter count and data (Kaplan et al., 2020; Hoffmann et al., 2022). Various abilities emerge only when models reach a particular scale (Ganguli et al., 2022; Wei et al., 2022a; Brown et al., 2020). Emergence poses a challenge to AI safety, as dangerous capabilities could emerge unexpectedly. This motivates finding sub-components or proxies of dangerous capabilities that can be measured in small models and extrapolated to larger ones (Shevlane et al., 2023).
2309.00667#41
2309.00667#43
2309.00667
[ "2306.12001" ]
2309.00667#43
Taken out of context: On measuring situational awareness in LLMs
Editing the knowledge of LLMs. Models learn something akin to broad knowledge bases from their pretraining corpora (Petroni et al., 2019). The knowledge editing literature seeks to edit this knowledge via either hyper-networks (De Cao et al., 2021; Hase et al., 2023) or closed-form weight edits (Meng et al., 2022a; Mitchell et al., 2021; Yao et al., 2022). In this paper, we aim to add knowledge in a way that mirrors pre-training (see §2.4) and so we add knowledge by finetuning on a dataset of (fictitious) facts, as in Zhu et al. (2020). Finetuning is usually a weak baseline for model editing (Meng et al., 2022a;b; Mitchell et al., 2021). Yet we show that finetuning on novel facts can lead to robust downstream inferences if data augmentation is used (see §3.1.2). Specifically, we use an additional LLM to rephrase each fictitious fact in 300 distinct ways and finetune on all rephrasings. This technique is a simpler version of techniques found in the NLP data augmentation literature (Sennrich et al., 2016; Cai et al., 2020; Kobayashi, 2018; Eldan & Li, 2023). We apply augmentation to adding knowledge, rather than editing, but we expect it to also work for editing. In-context instruction following. Pretrained language models can be finetuned to follow instructions given in-context in the prompt (Wei et al., 2021; Ouyang et al., 2022; Askell et al., 2021). In our Out-of-context Chatbots test suite, instructions are not present in a modelâ s test-time prompt and the model is not trained on demonstrations. Instead, the model must act at test time based on declarative knowledge learned during training. That said, the tasks the model performs at test time are typical NLP tasks taken (in part) from Natural Instructions (Wang et al., 2022). Out-of-context meta-learning.
2309.00667#42
2309.00667#44
2309.00667
[ "2306.12001" ]
2309.00667#44
Taken out of context: On measuring situational awareness in LLMs
First explored in (Krasheninnikov et al., 2023), out-of-context meta-learning describes the ability for models to preferentially use knowledge from textual sources which made more accurate local predictions in a finetuning phase. This demonstrates a mechanism by which LLMs may learn to leverage knowledge about their own training process, and is closely related to our approach (§2.4). Situational awareness and misalignment. The AI Safety literature contains many discussions of the model capabilities and behaviors which could lead to societal-scale risks (Hendrycks et al., 2023; Critch & Russell, 2023; Carlsmith, 2022; Evans et al., 2021). In this paper, we focus on failure modes which are enabled by models having a high level of situational awareness (Cotra, 2022), a capability we define in §2. In particular, our work relates to previous discussions around deceptive alignment (Hubinger et al., 2019; Hubringer, 2022) and situationally-aware reward hacking (Ngo et al., 2022). We seek to connect previous discussions to experiments in current models.
2309.00667#43
2309.00667#45
2309.00667
[ "2306.12001" ]
2309.00667#45
Taken out of context: On measuring situational awareness in LLMs
17 17 # Contributions and Acknowledgments # Author contributions: Meg Tong designed Out-of-context Chatbots, implemented Experiments 1a and 1b and many ablations, and contributed significantly to Experiments 1c and 3. Tomasz Korbak designed and implemented Experiment 3 and drafted §3.1.4. Mikita Balesni designed and implemented Experiment 2 and the experiment in Fig.6b. Max Kaufmann implemented experiments (unpublished) that advanced our understanding of SOC reasoning and contributed to writing the paper. Asa Cooper Stickland implemented Experiment 1c and the prompting experiments for 1b and 1c, and contributed significantly to writing the paper (drafting §3 and the appendix). Lukas Berglund implemented experiments (unpublished or in Berglund et al. (2023)) that advanced our understanding of SOC reasoning. Daniel Kokotajlo contributed key concepts on situational awareness and co-managed the first half of the project. Owain Evans contributed key concepts on situational awareness, was the primary writer of the paper, and managed the project. All authors except DK and OE contributed to infrastructure for running experiments and to precursors to Out-of-context Chatbots. All authors contributed to the conceptual underpinnings of the project. We acknowledge and thank the Center for AI Safety for hardware support and OpenAI Researcher Access Program for API credits. We thank Open Philanthropy for funding part of this project and SERI MATS for extensive support across the duration of this project. We thank the following people for valuable comments: Dmitrii Krasheninnikov, David Krueger, Ajeya Cotra, Elizabeth Barnes, Hjalmar Wijk, Roger Grosse, Sören Mindermann, Jan Brauner, Miles Turpin, Paul Christiano, Marius Hobbhahn, Jade Leung, Cem Anil, Alex Havrilla, Jeremy Scheurer, Claudia Shi, and David Duvenaud.
2309.00667#44
2309.00667#46
2309.00667
[ "2306.12001" ]
2309.00667#46
Taken out of context: On measuring situational awareness in LLMs
References Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Jackson Kernion, Kamal Ndousse, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, and Jared Kaplan. A general language assistant as a laboratory for alignment, 2021. Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, and Owain Evans.
2309.00667#45
2309.00667#47
2309.00667
[ "2306.12001" ]
2309.00667#47
Taken out of context: On measuring situational awareness in LLMs
The curse of reversal: Llms trained on a=b, fail to infer b=a. Manuscript in preparation, August 2023. Samuel R Bowman. Eight things to know about large language models. arXiv preprint arXiv:2304.00612, 2023. 18 18 # Gwern Branwen. The scaling hypothesis, 2021. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877â
2309.00667#46
2309.00667#48
2309.00667
[ "2306.12001" ]
2309.00667#48
Taken out of context: On measuring situational awareness in LLMs
1901, 2020. Hengyi Cai, Hongshen Chen, Yonghao Song, Cheng Zhang, Xiaofang Zhao, and Dawei Yin. Data manipulation: Towards effective instance learning for neural dialogue generation via learning to augment and reweight. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pp. 6334â 6343, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.564. URL https://aclanthology.org/2020.acl-main.564. Joseph Carlsmith. Is power-seeking ai an existential risk? arXiv preprint arXiv:2206.13353, 2022. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022. the easiest path to transformative ai likely leads to ai takeover. https://www.alignmentforum.org/posts/pRkFkzwKZ2zfa3R6H/ without-specific-countermeasures-the-easiest-path-to, 2022.
2309.00667#47
2309.00667#49
2309.00667
[ "2306.12001" ]
2309.00667#49
Taken out of context: On measuring situational awareness in LLMs
Ajeya Cotra. Without specific countermeasures, Andrew Critch and Stuart Russell. Tasra: A taxonomy and analysis of societal-scale risks from ai. arXiv preprint arXiv:2306.06924, 2023. Nicola De Cao, Wilker Aziz, and Ivan Titov. Editing factual knowledge in language models. arXiv preprint arXiv:2104.08164, 2021. Andy Egan and Michael G.
2309.00667#48
2309.00667#50
2309.00667
[ "2306.12001" ]