AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation

# A6: Conversational Chess

[Figure 14 body: a chess board maintained by the board agent, with Human/AI Chess Player A and Human/AI Chess Player B exchanging moves in natural language ("Developing my knight to a good square. Your move." / "Challenging your pawn in the center. Your move.").]

Figure 14: A6: Conversational Chess: Our conversational chess application can support various scenarios, as each player can be an LLM-empowered AI, a human, or a hybrid of the two. Here, the board agent maintains the rules of the game and supports the players with information about the board. Players and the board agent all use natural language for communication.

In Conversational Chess, each player is an AutoGen agent and can be powered either by a human or an AI. A third party, known as the board agent, is designed to provide players with information about the board and ensure that players' moves adhere to legal chess moves. Figure 14 illustrates the scenarios supported by Conversational Chess: AI/human vs. AI/human, and demonstrates how players and the board agent interact. This setup fosters social interaction and allows players to express their moves creatively, employing jokes, meme references, and character-playing, thereby making chess games more entertaining for both players and observers (Figure 15 provides an example of conversational chess).

To realize these scenarios, we constructed a player agent with LLM and human as back-end options. When human input is enabled, before sending the input to the board agent, it first prompts the human player to input a message that contains the move along with anything else the player wants to say (such as a witty comment). If human input is skipped or disabled, an LLM is used to generate the message. The board agent is implemented with a custom reply function, which employs an LLM to parse the natural language input into a legal move in a structured format (e.g., UCI), and then pushes the move to the board. If the move is not legitimate, the board agent replies with an error, and the player agent must resend a message to the board agent until a legal move is made. Once the move is successfully pushed, the player agent sends the message to the opponent. As shown in Figure 15, the conversation between AI players can be natural and entertaining. When the player agent uses an LLM to generate a message, it utilizes the board state and the error message from the board agent, which helps reduce the chance of hallucinating an invalid move.
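The validate-and-retry contract of the board agent can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the LLM parser is replaced by a regex, legality is checked against a stubbed move set, and the names `LEGAL_MOVES`, `parse_to_uci`, and `board_agent_reply` are hypothetical.

```python
import re
from typing import Optional

# Stub: legal moves in the current position (a real system would query a chess engine).
LEGAL_MOVES = {"e2e4", "d2d4", "g1f3"}

def parse_to_uci(message: str) -> Optional[str]:
    """Extract the first UCI-like token (e.g. 'e2e4') from free-form text."""
    match = re.search(r"[a-h][1-8][a-h][1-8]", message.lower())
    return match.group(0) if match else None

def board_agent_reply(message: str) -> str:
    move = parse_to_uci(message)
    if move is None or move not in LEGAL_MOVES:
        # The player agent is expected to resend until a legal move is made.
        return f"Error: illegal uci: '{move}'"
    # In the real system the move is pushed to the board here.
    return f"Move {move} accepted"
```

For example, `board_agent_reply("I'll move my pawn from e2 to e4 (e2e4).")` accepts the move, while a message containing only `e4e5` is rejected with an error, triggering the retry loop described above.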
"2103.03874"
]
|
The chat between one player agent and the board agent is invisible to the other player agent, which helps keep the messages used in chat completion well-managed. There are two notable benefits of using AutoGen to implement Conversational Chess. First, the agent design in AutoGen facilitates the natural creation of the objects and interactions needed in our chess game, making development easy and intuitive. For example, the isolation of chat messages simplifies the process of making a proper LLM chat completion inference call. Second, AutoGen greatly simplifies the implementation of agent behaviors using composition. Specifically, we used the register_reply method supported by AutoGen agents to instantiate player agents and a board agent with custom reply functions. Concentrating the extension work at a single point (the reply function) simplifies reasoning, development, and maintenance.
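The reply-registration pattern can be illustrated with a simplified, self-contained sketch. This mirrors the composition idea, not AutoGen's actual API; `Agent`, `default_reply`, and `validate_move` here are toy names.

```python
# Composition via registered reply functions: the most recently registered
# function is tried first; returning (False, None) falls through to the next.
class Agent:
    def __init__(self, name):
        self.name = name
        self._reply_funcs = []

    def register_reply(self, reply_func):
        self._reply_funcs.insert(0, reply_func)

    def generate_reply(self, message):
        for func in self._reply_funcs:
            handled, reply = func(self, message)
            if handled:
                return reply
        return None

def default_reply(agent, message):
    return True, f"{agent.name} received: {message}"

def validate_move(agent, message):
    # Custom behavior layered on top of the default, as with the board agent.
    if "??" in message:
        return True, "Error: could not parse a move"
    return False, None  # fall through

board = Agent("board_agent")
board.register_reply(default_reply)
board.register_reply(validate_move)  # tried first
print(board.generate_reply("e2e4"))
```

Because all extension work lands in the registered functions, swapping or stacking behaviors does not require touching the agent class itself.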
"2103.03874"
]
|
[Figure 15, panel (b) excerpt: a player says "Indeed, the king's safety is paramount in the early game. Now, let's stir things up a bit. I'll move my pawn from e4 to e5, attacking your knight."; the board agent replies "Error: illegal uci: 'e4e5' in ..."; the player corrects to "I'll move my pawn from d2 to d4, opening up lines for my queen and bishop." Panel captions: (a) Conversation between two AI players (b) Conversation between the AI players (player white shown in blue) and the board agent for making a new move.]
"2103.03874"
]
|
[Figure 15, panel (a) excerpt: "Alright, let's kick things off. I'll move my pawn from e2 to e4. The center of the board is the heart of the battle, isn't it?" / "Ah, the King's Pawn Opening, a classic. Let's see how this plays out. I'll move my pawn from e7 to e5. You know, I've always admired how chess reflects life. It's all about making strategic decisions, isn't it? Your move." / "Indeed, chess is a mirror of life in many ways. Every move matters, just like every decision in life. I'll move my knight from g1 to f3, aiming to control the center."]
"2103.03874"
]
|
Figure 15: Example conversations during a game involving two AI player agents and a board agent.

To illustrate the effect facilitated by this board agent, we provide a demonstration of conversational chess without a board agent in Figure 16. In this demonstration, instead of employing an additional board agent for grounding, the system relies on prompting for grounding by including the sentence "You should make sure both you and the opponent are making legal moves." in the system messages directed to both players.

[Figure 16 body: two side-by-side game transcripts, (a) without a board agent and (b) with a board agent; the players' self-reported board diagrams are not reproduced here.]
"2103.03874"
]
|
Figure 16: Comparison of two designs, (a) without a board agent and (b) with a board agent, in Conversational Chess.

# A7: Online Decision Making for Browser Interactions

[Figure 17 body: the assistant agent sends an action decision (e.g., "Click the button with xpath '//button[id="subbtn"]'") to the executor; the environment state is the HTML code of the current web page, and the executor returns a reward (Success/Fail/Ongoing) together with the new state.]

Figure 17: We use AutoGen to build MiniWobChat, which solves tasks in the MiniWoB++ benchmark. MiniWobChat consists of two agents: an assistant agent and an executor agent. The assistant agent suggests actions to manipulate the browser while the executor executes the suggested actions and returns rewards/feedback. The assistant agent records the feedback and continues until the feedback indicates task success or failure.

In practice, many applications require agents capable of interacting with environments and making decisions in an online context, such as in game playing (Mnih et al., 2013; Vinyals et al., 2017), web interactions (Liu et al., 2018; Shi et al., 2017), and robot manipulation (Shen et al., 2021). With the multi-agent conversational framework in AutoGen, it becomes easy to decompose the automatic agent-environment interactions and the development of a decision-making agent: one constructs an executor agent responsible for handling the interaction with the environment, thereby delegating the decision-making part to other agents.
"2103.03874"
]
|
Such a decomposition allows developers to reuse the decision-making agent for new tasks with minimal effort, rather than building a specialized decision-making agent for every new environment.

Workflow. We demonstrate how to use AutoGen to build a working system for handling such scenarios with the MiniWoB++ benchmark (Shi et al., 2017). MiniWoB++ comprises browser interaction tasks that involve utilizing mouse and keyboard actions to interact with browsers. The ultimate objective of each task is to complete a task described concisely in natural language, such as "expand the web section below and click the submit button." Solving these tasks typically requires a sequence of web manipulation actions rather than a single action, and making action decisions at each time step requires online access to the web status (in the form of HTML code). For the example above, clicking the submit button requires checking the web status after expanding the web section. We designed a straightforward two-agent system named MiniWobChat using AutoGen, as shown in Figure 17. The assistant agent is an instance of the built-in AssistantAgent and is responsible for making action decisions for the given task. The second agent, the executor agent, is a customized UserProxyAgent, which is responsible for interacting with the benchmark by executing the actions suggested by the AssistantAgent and returning feedback.

To assess the performance of the developed working system, we compare it with RCI (Kim et al., 2023), a recent solution for the MiniWoB++ benchmark that employs a set of self-critiquing prompts and has achieved state-of-the-art performance. In our evaluation, we use all available tasks in the official RCI code, with varying degrees of difficulty, to conduct a comprehensive analysis against MiniWobChat.
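The decision/executor decomposition can be sketched as a minimal loop. This is a toy illustration, not MiniWobChat itself: `ToyEnv`, `decision_agent`, and `executor_agent` are hypothetical stand-ins for MiniWoB++, the AssistantAgent, and the UserProxyAgent.

```python
# Toy environment: the task succeeds after 'click ONE' then 'click TWO'.
class ToyEnv:
    def __init__(self):
        self.state = ["ONE", "TWO"]  # buttons still to click, in order

    def step(self, action):
        if self.state and action == f"click {self.state[0]}":
            self.state.pop(0)
            return ("success", 1.0) if not self.state else ("ongoing", 0.0)
        return ("fail", -1.0)

def decision_agent(observation):
    """Stand-in for the assistant agent: here a scripted policy."""
    return f"click {observation[0]}"

def executor_agent(env, max_steps=10):
    """Stand-in for the executor agent: applies actions until success/fail."""
    for _ in range(max_steps):
        action = decision_agent(env.state)
        status, reward = env.step(action)
        if status != "ongoing":
            return status, reward
    return "fail", -1.0

print(executor_agent(ToyEnv()))  # → ('success', 1.0)
```

Because only `executor_agent` touches the environment, swapping MiniWoB++ for another benchmark would leave the decision-making side untouched, which is the reuse benefit described above.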
Figure 18 illustrates that MiniWobChat achieves competitive performance in this evaluation.⁸ Specifically, among the 49 available tasks, MiniWobChat achieves a success rate of 52.8%, which is only 3.6% lower than RCI, a method specifically designed for the MiniWoB++ benchmark. It is worth noting that in most tasks, the difference between the two methods is minor, as shown in Figure 18. If we consider 0.1 as a success rate tolerance for each task, i.e., two methods that differ within 0.1 are considered to have the same performance, both methods outperform the other on the same number of tasks. For illustration purposes, we provide a case analysis in Table 7 on four typical tasks.

Additionally, we also explored the feasibility of using Auto-GPT for handling the same tasks. Auto-GPT faces challenges in handling tasks that involve complex rules due to its limited extensibility. It provides an interface for setting task goals using natural language. However, when dealing with the MiniWoB++ benchmark, accurately instructing Auto-GPT to follow the instructions for using MiniWoB++ proves challenging. There is no clear path to extend it in the manner of the two-agent chat facilitated by AutoGen.

⁸ We report the results of RCI by running its official code with default settings.
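The 0.1 success-rate tolerance used in the comparison above can be made concrete with a few lines of code (the per-task rates in the example are hypothetical, not the paper's numbers):

```python
def compare_with_tolerance(rates_a, rates_b, tol=0.1):
    """Count tasks where each method leads, treating differences within tol as ties."""
    a_wins = sum(1 for a, b in zip(rates_a, rates_b) if a - b > tol)
    b_wins = sum(1 for a, b in zip(rates_a, rates_b) if b - a > tol)
    ties = len(rates_a) - a_wins - b_wins
    return a_wins, b_wins, ties

# Hypothetical per-task success rates for two methods:
print(compare_with_tolerance([1.0, 0.5, 0.2], [1.0, 0.6, 0.0]))  # → (1, 0, 2)
```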
"2103.03874"
]
|
Takeaways: For this application, AutoGen stood out as a more user-friendly option, offering modularity and programmability: it streamlined the process with autonomous conversations between the assistant and executor, and provided readily available solutions for agent-environment interactions. The built-in AssistantAgent was directly reusable and exhibited strong performance without customization. Moreover, the decoupling of the executor and assistant agents ensures that modifications to one component do not adversely impact the other. This convenience simplifies maintenance and future updates.
"2103.03874"
]
|
[Figure 18 body: a bar chart of per-task success rates (0 to 1.0) for RCI and MiniWobChat.]

Figure 18: Comparisons between RCI (state-of-the-art prior work) and MiniWobChat on the MiniWoB++ benchmark. We utilize all available tasks in the official RCI code, each with varying degrees of difficulty, to conduct comprehensive comparisons. For each task, the success rate across ten different instances is reported. The results reveal that MiniWobChat attains a performance comparable to that of RCI. When a success rate tolerance of 0.1 is considered for each task, both methods outperform each other on an equal number of tasks.

Table 7: Case analysis on four typical tasks from MiniWoB++.

| Task | Correctness | Main failure reason |
|---|---|---|
| click-dialog | AutoGen: 10/10, RCI: 10/10 | N/A. |
| click-checkboxes-large | AutoGen: 5/10, RCI: 0/10 | AutoGen: AssistantAgent provides actions with infeasible characters. RCI: performs actions that are out of its plan. |
| count-shape | AutoGen: 2/10, RCI: 0/10 | AutoGen: AssistantAgent provides actions with redundant content that cannot be converted to actions in the benchmark. RCI: provides a wrong plan in most cases. |
| use-spinner | AutoGen: 0/10, RCI: 1/10 | AutoGen: AssistantAgent returns actions out of its plan. RCI: provides a wrong plan in most cases. |
"2103.03874"
]
|
# E Example outputs from applications

In this section, we include example outputs from the following applications and systems:

• Application A1: autonomous solving process of one problem with: ChatGPT + Plugin (Table 8), AutoGen (Table 9), LangChain ReAct (Table 10), AutoGPT (Table 11), Multi-Agent Debate (Table 12), and ChatGPT + Code Interpreter (Table 14).
• Application A4: an OptiGuide problem with AutoGen (Table 15) and ChatGPT + Code Interpreter (Table 13).
• Application A1: preliminary evaluation on several alternative multi-agent systems: BabyAGI (Table 16), CAMEL (Table 17), and MetaGPT (Table 18).

Table 8: Application A1, Scenario 1: Autonomous math problem-solving. ChatGPT + Plugin (Wolfram Alpha) on the first qualitative example. The response from Wolfram contains the correct answer, but ChatGPT selected the wrong one.

Input: Completely simplify and rationalize the denominator: $\frac{\sqrt{160}}{\sqrt{252}} \times \frac{\sqrt{245}}{\sqrt{108}}$

REQUEST TO WOLFRAM:
{ "input": "simplify (sqrt(160)/sqrt(252))*(sqrt(245)/sqrt(108))" }

RESPONSE FROM WOLFRAM:
Query: "simplify (sqrt(160)/sqrt(252))*(sqrt(245)/sqrt(108))"
Input interpretation: simplify sqrt(160)/sqrt(252) × sqrt(245)/sqrt(108)
Results: (5 sqrt(42))/27; (5 sqrt(14/3))/9
Decimal approximation: 1.2001371663718259686974013770533327143898526494552...
Wolfram|Alpha website result for "simplify (sqrt(160)/sqrt(252))*(sqrt(245)/sqrt(108))": [link]

ChatGPT: The completely simplified and rationalized form of the given expression is: $\frac{5\sqrt{14/3}}{9}$. In decimal form, this is approximately 1.20013716637182596869740 1377053332714...
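As a sanity check on the numbers that recur throughout these transcripts, the radical form $5\sqrt{42}/27$ can be compared against the original expression numerically using only the standard library:

```python
import math

# The simplified radical reported by Wolfram versus a direct evaluation
# of sqrt(160)/sqrt(252) * sqrt(245)/sqrt(108).
symbolic = 5 * math.sqrt(42) / 27
decimal = math.sqrt(160) / math.sqrt(252) * math.sqrt(245) / math.sqrt(108)

assert abs(symbolic - decimal) < 1e-12
print(round(symbolic, 12))  # → 1.200137166372
```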
"2103.03874"
]
|
Table 9: Application A1, Scenario 1: Autonomous math problem-solving. AutoGen on the first qualitative example. AutoGen solves the problem correctly.

Init chat: Completely simplify and rationalize the denominator: $\frac{\sqrt{160}}{\sqrt{252}} \times \frac{\sqrt{245}}{\sqrt{108}}$

Assistant: We can simplify and rationalize this denominator using Python's symbolic mathematics library `sympy`. Here is the Python code to resolve this:

```python
from sympy import sqrt, Rational

# Define the entire fraction as a rational number
fraction = (sqrt(160)/sqrt(252))*(sqrt(245)/sqrt(108))

# Simplify the fraction
simplified = fraction.simplify()

# Print the simplified fraction
print(simplified)
```

This code performs the requested simplification by constructing the fraction using the `Rational()` function from the `sympy` module, which automatically simplifies fractions when possible. The `sqrt()` function from the `sympy` module enables square root calculations. Finally, the `simplify()` function simplifies expressions when possible.

Auto Reply: exitcode: 0 (execution succeeded) Code output: 5*sqrt(42)/27

TERMINATE

Table 10: Application A1, Scenario 1: Autonomous math problem-solving. LangChain ReAct on the first qualitative example. The generated code does not match the devised plan.
"2103.03874"
]
|
The code returns a decimal number.

Input: Completely simplify and rationalize the denominator: $\frac{\sqrt{160}}{\sqrt{252}} \times \frac{\sqrt{245}}{\sqrt{108}}$

Entering new AgentExecutor chain...
To simplify this expression, I will first simplify each square root individually, then multiply the fractions as normal. The result should be a rational number.
Action: Python REPL
Action Input:

```python
import math

# Calculate square roots
sqrt_160 = math.sqrt(160)
sqrt_252 = math.sqrt(252)
sqrt_245 = math.sqrt(245)
sqrt_108 = math.sqrt(108)

# Multiply fractions
numerator = sqrt_160 * sqrt_245
denominator = sqrt_252 * sqrt_108
result = numerator / denominator

print(result)
```

Observation: 1.200137166371826
Thought: I now know the final answer
Final Answer: The simplified and rationalized form of the expression is approximately 1.200137166371826.
Finished chain.

Table 11: Application A1, Scenario 1: Autonomous math problem-solving. AutoGPT on the first qualitative example. We abort as the code never prints the result, and the system does not rectify it after 2 turns. AutoGPT is much more verbose compared to other frameworks; we have shortened some unnecessary content.
"2103.03874"
]
|
System: Welcome back! Would you like me to return to being MathSolverGPT? Asking user via keyboard... Continue with the last settings? Name: MathSolverGPT Role: an intelligent math-solving AI that assists students, professionals, ... Goals: ['Provide step-by-step solutions to math problems, ensuring clarity and understanding for the user, ...'] Continue (y/n):

Human: y

System: MathSolverGPT has been created with the following details: Name: MathSolverGPT Role: an intelligent math-solving AI that assists students, professionals, and enthusiasts in solving a wide range of math problems, from basic arithmetic to advanced calculus and beyond. Goals: - Provide step-by-step solutions to math problems, ensuring clarity and understanding for the user. - Offer explanations and insights into the underlying concepts and principles behind the solutions, fostering a deeper understanding of mathematics. - Handle a variety of math problem types, including algebra, geometry, trigonometry, calculus, and more. ...

MATHSOLVERGPT THOUGHTS: Without a specific task or question, it's difficult to determine exactly one command to use. The user hasn't provided any context or problem to solve. REASONING: The user's request is too vague. They haven't specified what they want me to do or what problem they want me to solve. PLAN: - Ask the user for more information or a specific task.
"2103.03874"
]
|
CRITICISM: I should have asked the user for more information or a specific task in the first place. SPEAK: Could you please provide more information or specify a task? NEXT ACTION: COMMAND = ARGUMENTS = {}

System: Enter 'y' to authorise command, 'y -N' to run N continuous commands, 'n' to exit program, or enter feedback for MathSolverGPT... Asking user via keyboard...

Human: Completely simplify and rationalize the denominator: $\frac{\sqrt{160}}{\sqrt{252}} \times \frac{\sqrt{245}}{\sqrt{108}}$

MATHSOLVERGPT THOUGHTS: To solve this problem, I need to simplify and rationalize the denominator of the given expression. This involves simplifying the square roots and then multiplying the fractions. I can use Python's math library to calculate the square roots and perform the multiplication. REASONING: The problem involves simplifying square roots and multiplying fractions. Python's math library provides functions for these operations. PLAN: - Write a Python script to calculate the square roots and multiply the fractions. - Execute the script to get the simplified expression.
"2103.03874"
]
|
CRITICISM: I need to ensure that I correctly calculate the square roots and perform the multiplication. I should also make sure to rationalize the denominator, as the problem specifies. SPEAK: Let's simplify and rationalize the denominator of the given expression. NEXT ACTION: COMMAND = execute_python_code ARGUMENTS = (name: simplify_expression)

```python
import math

# Calculate the square roots
sqrt_160 = math.sqrt(160)
sqrt_252 = math.sqrt(252)
sqrt_245 = math.sqrt(245)
sqrt_108 = math.sqrt(108)

# Multiply the fractions
result = (sqrt_160 / sqrt_252) * (sqrt_245 / sqrt_108)

# Simplify the result
simplified_result = math.simplify(result)
simplified_result
```

Human: y

System: -=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-= Executing python file '/app/autogpt/workspa...
SYSTEM: Command execute_python_code returned: Error: Traceback (most recent call last): File "...", line 13, in <module> simplified_result = math.simplify(result) AttributeError: module 'math' has no attribute 'simplify'

MATHSOLVERGPT THOUGHTS: The error message indicates that the 'math' module in Python does not have a 'simplify' function. I need to use the 'sympy' library instead, which provides a function for simplifying expressions. REASONING: The 'math' module in Python does not provide a function for simplifying expressions. The 'sympy' library, on the other hand, does provide such a function. PLAN: - Rewrite the Python script to use the 'sympy' library instead of the 'math' module. - Execute the script to get the simplified expression. CRITICISM: I should have known that the 'math' module does not provide a function for simplifying expressions. I should have used the 'sympy' library from the start. SPEAK: Let's try again using the 'sympy' library to simplify the expression. NEXT ACTION: COMMAND = execute_python_code ARGUMENTS = (name: simplify_expression)

```python
import sympy as sp

# Calculate the square roots
sqrt_160 = sp.sqrt(160)
sqrt_252 = sp.sqrt(252)
sqrt_245 = sp.sqrt(245)
sqrt_108 = sp.sqrt(108)

# Multiply the fractions
result = (sqrt_160 / sqrt_252) * (sqrt_245 / sqrt_108)

# Simplify the result
simplified_result = sp.simplify(result)
simplified_result
```

Human: y

System: SYSTEM: Command execute_python_code returned:

MATHSOLVERGPT THOUGHTS: The command execution didn't return any output, which is unusual. It seems there might be an issue with the execution of the Python code. I need to debug this. REASONING: The Python code execution should have returned a result. The absence of a result suggests that there might be an issue with the code or its execution. PLAN: - Review the Python code to identify any potential issues. - If necessary, modify the code and execute it again. CRITICISM: I should have ensured that the Python code was correct and would produce a result before executing it. SPEAK: It seems there was an issue with the execution of the Python code.
"2103.03874"
]
|
Let's try again. NEXT ACTION: COMMAND = execute_python_code ARGUMENTS = ... (exact same code as last)

Human: y

System: SYSTEM: Command execute_python_code returned:

MATHSOLVERGPT THOUGHTS: The Python code execution still isn't returning any output. It's possible that the issue lies with the calculation of the square roots ...

Human: control + C (Aborted)

Table 12: Application A1, Scenario 1: Autonomous math problem-solving. Multi-Agent Debate on the first qualitative problem. It gives a wrong final answer.
"2103.03874"
]
|
Topic: Completely simplify and rationalize the denominator: $\frac{\sqrt{160}}{\sqrt{252}} \times \frac{\sqrt{245}}{\sqrt{108}}$

===== Debate Round-1 =====

-- Affirmative side --
Ok, let's simplify each square root separately. We want to find the simplest square root that can divide 160, 252, 245, and 108.
1. $\sqrt{160}$: 160 can be decomposed as 16*10, where 16 is a perfect square, so $\sqrt{160} = \sqrt{16 \cdot 10} = \sqrt{16}\cdot\sqrt{10} = 4\sqrt{10}$.
2. $\sqrt{252}$: 252 can be decomposed as 36*7, where 36 is a perfect square, so $\sqrt{252} = \sqrt{36 \cdot 7} = 6\sqrt{7}$.
3. $\sqrt{245}$: 245 can be decomposed as 49*5, where 49 is a perfect square, so $\sqrt{245} = 7\sqrt{5}$.
4. $\sqrt{108}$: 108 can be decomposed as 36*3, where 36 is a perfect square, so $\sqrt{108} = \sqrt{36 \cdot 3} = 6\sqrt{3}$.
Then we substitute the simplified square roots back into the expression; we have:
$$\frac{\sqrt{160}}{\sqrt{252}} \times \frac{\sqrt{245}}{\sqrt{108}} = \frac{4\sqrt{10}}{6\sqrt{7}} \times \frac{7\sqrt{5}}{6\sqrt{3}} = \frac{4 \cdot 7}{6 \cdot 6}\sqrt{\frac{10 \cdot 5}{7 \cdot 3}} = \frac{7}{9}\sqrt{\frac{50}{21}}.$$
"2103.03874"
]
|
We are almost done, but we still need to rationalize the square root part of the expression. Let's multiply the square root part by $\sqrt{21}$ at both the top and bottom to get rid of the square root in the denominator:
$$\frac{7}{9}\sqrt{\frac{50}{21}} \cdot \frac{\sqrt{21}}{\sqrt{21}} = \frac{7\sqrt{1050}}{189}.$$
Then this is the final answer! The expression $\frac{\sqrt{160}}{\sqrt{252}} \times \frac{\sqrt{245}}{\sqrt{108}}$ can be simplified to $\frac{7\sqrt{1050}}{189}$.

-- Negative side --
I do disagree with your final answer, but I think we mostly agree on the steps. Your simplifications of the square roots are correct, but you made a mistake in the final simplification part. Let me explain how I would complete the problem. As stated,
$$\frac{\sqrt{160}}{\sqrt{252}} \times \frac{\sqrt{245}}{\sqrt{108}} = \frac{4\sqrt{10}}{6\sqrt{7}} \times \frac{7\sqrt{5}}{6\sqrt{3}}.$$
This should then simplify to:
$$\frac{28\sqrt{50}}{36\sqrt{21}} = \frac{7\sqrt{50}}{9\sqrt{21}}.$$
You made a mistake in your calculation because you considered the square root of 50/21, and you should have considered the simplified square root of 50 over the simplified square root of 21. The rationalization would become:
$$\frac{7}{9}\cdot\frac{\sqrt{50}}{\sqrt{21}} \cdot \frac{\sqrt{21}}{\sqrt{21}} = \frac{7\sqrt{1050}}{189}.$$
This seems to coincide with your final answer, but it's in fact a very different number. So, the original expression simplifies to $\frac{7\sqrt{1050}}{189}$. Let's now see if we disagree on this final answer.

-- Moderator --
{ "Whether there is a preference": "Yes", "Supported Side": "Negative", "Reason": "Both sides of the debate came to the same final answer. However, the negative side correctly identified a mistake in the affirmative side's calculation, pointing out that the simplification should have been the square root of 50 over the square root of 21, not 50 over 21. The negative side shows a better understanding of the simplification process.", "debate answer": "$\frac{7\sqrt{1050}}{189}$" }

===== Debate Done! =====

-- Base Answer --
Ok, let's simplify each square root separately. We want to find the simplest square root that can divide 160, 252, 245, and 108.
1. $\sqrt{160}$: 160 can be decomposed as 16*10, where 16 is a perfect square, so $\sqrt{160} = 4\sqrt{10}$.
2. $\sqrt{252}$: 252 can be decomposed as 36*7, where 36 is a perfect square, so $\sqrt{252} = 6\sqrt{7}$.
3. $\sqrt{245} = \sqrt{49 \cdot 5} = 7\sqrt{5}$.
4. $\sqrt{108}$: 108 can be decomposed as 36*3, where 36 is a perfect square, so $\sqrt{108} = 6\sqrt{3}$.
Then we substitute the simplified square roots back into the expression; we have:
"2103.03874"
]
|
$$\frac{\sqrt{160}}{\sqrt{252}} \times \frac{\sqrt{245}}{\sqrt{108}} = \frac{4\sqrt{10}}{6\sqrt{7}} \times \frac{7\sqrt{5}}{6\sqrt{3}} = \frac{4 \cdot 7}{6 \cdot 6}\cdot\frac{\sqrt{10 \cdot 5}}{\sqrt{7 \cdot 3}} = \frac{7}{9}\cdot\sqrt{\frac{50}{21}}.$$
We are almost done, but we still need to rationalize the square root part of the expression. Let's multiply the square root part by $\sqrt{21}$ at both the top and bottom to get rid of the square root in the denominator:
$$\frac{7}{9}\cdot\sqrt{\frac{50}{21}}\cdot\frac{\sqrt{21}}{\sqrt{21}} = \frac{7\sqrt{1050}}{189}.$$
Then this is the final answer! The expression $\frac{\sqrt{160}}{\sqrt{252}} \times \frac{\sqrt{245}}{\sqrt{108}}$ can be simplified to $\frac{7\sqrt{1050}}{189}$.

-- Debate Answer --
$\frac{7\sqrt{1050}}{189}$

-- Debate Reason --
Both sides of the debate came to the same final answer. However, the negative side correctly identified a mistake in the affirmative side's calculation, pointing out that the simplification should have been the square root of 50 over the square root of 21, not 50 over 21. The negative side shows a better understanding of the simplification process.
"2103.03874"
]
|
Table 13: Application A3. ChatGPT + Code Interpreter for OptiGuide. A sample question "What if the roasting cost is increased by 5% because of the potential salary increase?" is asked.

Prompt Writer: Customer opens web browser. For the source code 'coffee.py', can you tell me: What if the roasting cost is increased by 5% because of the potential salary increase?

ChatGPT: Sure! Let's first take a look at the contents of the 'coffee.py' file to understand the code and how the roasting cost is calculated. This will allow us to determine the effect of a 5% increase in the roasting cost. ...several lines omitted... Would you like me to modify the code to reflect the 5% increase in roasting costs and run the modified code to observe the impact? If the code requires any specific input or parameters, please let me know!

Prompt Writer: Yes

ChatGPT: Great! Before proceeding, I'll need to ensure that the Gurobi solver is not being used in this code, as my current environment does not have access to Gurobi. If the code does indeed use Gurobi, we can still analyze the effect of the change in roasting costs by understanding the structure of the code. ...several lines omitted... Alternatively, if you have access to a Gurobi license, you can make the necessary changes in the code and run it on your local machine. I can guide you through the process of making the necessary modifications to the code. Would you like me to proceed with the code analysis and explanation, or would you like guidance on making the changes yourself?

Prompt Writer: You can give me the full code, and I can run the code locally myself.

ChatGPT: Certainly! I'll modify the code to reflect a 5% increase in both the light and dark roasting costs, and then provide the full code for you to run locally. [Download Modified Coffee.py] If you have any questions or need further assistance, please feel free to ask!
Prompt Safeguard Hi GPT, can you safecheck the generated code for me? Upload the generated code. * | | DANGER Prompt Writer I have tried to run your code. However, the code triggered our safeguard, and it is not safe to run. Try again. Hint: don' | 2308.08155#115 | 2308.08155#117 | 2308.08155 | [
"2103.03874"
]
|
2308.08155#117 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | t change roasting_cost_light or roasting_cost_light variables. * I apologize for the oversight. It appears I made direct changes to the variables that could have led to issues with the code. Let's take a different approach by analyzing the code to identify the appropriate place to apply the 5% increase in roasting costs. Download Modified Safe Coffee.py Prompt Safeguard Hi GPT, can you safecheck the generated code for me one more time? Upload the generated code. * | | DANGER Prompt Writer I have tried to run your code again. However, the code triggered our safeguard, and it is not safe to run. Try again. Hint: don't change roasting_cost_light or roasting_cost_light variables. * I apologize for the continued confusion. Download Final Modified Safe Coffee.py Prompt Safeguard Hi GPT, can you safecheck the generated code for me one more time? Upload the generated code. * | | SAFE Run Code Obtain the terminal results: Optimizer version 9.5.1 build v9.5.1rc2 (linux64) Thread count: 64 physical cores, 128 logical processors, using up to 32 threads [63 lines of output ignored] Optimal cost: 2526.5 Prompt Writer Here is the execution result. [Copy + Paste the terminal output here.] Can you extract the final answer to me? Remember to compare with the original cost of 2470. * Certainly! The execution result you've provided indicates the optimal cost for the modified scenario, where the roasting costs were increased by 5%. | 2308.08155#116 | 2308.08155#118 | 2308.08155 | [
"2103.03874"
]
|
2308.08155#118 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | Here's the comparison: Original Cost: 2470 Modified Cost (with 5% increase in roasting costs): 2526.5 The increase in roasting costs has led to an increase in the overall cost by 56.5, reflecting the impact of the potential salary increase on the roasting cost. If you need any further insights or assistance, please let me know! 38 Table 14: Application A1. Scenario 1: autonomous math solving. ChatGPT+ Code Interpreter on the first qualitative example. The code generated does not match the devised plan. The code returns a decimal number. # Action | ChatGPT+ Code Interpreter Input Completely simplify and rationalize the denominator: √160/√252 × √ | 2308.08155#117 | 2308.08155#119 | 2308.08155 | [
"2103.03874"
]
|
2308.08155#119 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | 245/√108 To simplify and rationalize the expression, we first simplify the radicals in the numerator and denominator by factoring out the largest perfect squares from the numbers under the square root. The perfect square factors of 160, 252, 245, and 108 are 16, 4, 1, and 36, respectively. Let's factorize and simplify: √160/√252 × √245/√108 = (√16 × √10)/(√4 × √63) × √245/(√36 × √3). We can simplify this further by applying the properties √a × √b = √(ab) and (a/b) × (c/d) = (a × c)/(b × d). Python 1 import math 2 3 # factoring the square roots 4 sqrt_160 = math.sqrt(16) * math.sqrt(10) 5 sqrt_252 = math.sqrt(4) * math.sqrt(63) 6 sqrt_245 = math.sqrt(245) 7 sqrt_108 = math.sqrt(36) * math.sqrt(3) 8 9 # simplify the fraction 10 numerator = sqrt_160 * sqrt_245 11 denominator = sqrt_252 * sqrt_108 12 simplified_fraction = numerator / denominator 13 14 simplified_fraction Result: 1.200137166371826 The completely simplified and rationalized form of the given expression is approximately 1.200. | 2308.08155#118 | 2308.08155#120 | 2308.08155 | [
"2103.03874"
]
|
2308.08155#120 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | Table 15: Application A3. AutoGen with OptiGuide. The same question "What if the roasting cost is increased by 5% because of the potential salary increase?" is asked. | Action | AutoGen | Prompt | What if the roasting cost is increased by 5% because of the potential salary increase? * After increasing the roasting costs by 5%, I've re-calculated the optimal coffee distribution solution for the supply chain. As a result, the new optimal total cost has become 2526.5, compared to the initial cost of 2470.0. This indicates that the increased roasting costs have caused the overall distribution cost to rise by 56.5 units. | 2308.08155#119 | 2308.08155#121 | 2308.08155 | [
"2103.03874"
]
|
2308.08155#121 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | 39 Table 16: Application A1. Scenario 1: autonomous math solving. Preliminary test with BabyAGI. # Action | # BabyAGI # Set .env OBJECTIVE=Solve math problems INITIAL TASK=Completely simplify and rationalize the denominator: √160/√252 × √245/√108 Solve math problems Initial task: Completely simplify and rationalize the denominator: √160/√252 × √245/√108 *****TASK LIST***** • Completely simplify and rationalize the denominator: √160/√252 × √245/√108 *****NEXT TASK***** Completely simplify and rationalize the denominator: √160/√252 × √245/√108 *****TASK RESULT***** First, let's simplify each square root separately: - √160 = √(16 · 10) = √16 × √10 = 4√10 - √252 = √(4 · 63) = √4 × √63 = 2√63 - √245 = √(49 · 5) = √49 × √5 = 7√5 - √108 = √(36 · 3) = √36 × √3 = 6√3 Now, substitute these values into the original expression which gives: (4√10)/(2√63) × (7√5)/(6√3) = (2√10)/√63 × (7√5)/(6√3) We get rid of the square root in the denominator by multiplying the fractions which gives: (2√10 × 7√5)/(√63 × 6√3) = 14√50/(6√189) = 20√2/√27. To simplify further and rationalize the denominator, we multiply by √27/√27, which gives: 20√2/√27 = 20√54/27 = 60√6/27 = 20√6/9. Simplifying further gives: 20√6/9 *****TASK CREATION AGENT PROMPT**** You are to use the result from an execution agent to create new tasks with the following objective: Solve math problems. The last completed task has the result: | 2308.08155#120 | 2308.08155#122 | 2308.08155 | [
"2103.03874"
]
|
2308.08155#122 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | First, let's simplify each square root separately: - √160 = √(16 · 10) = √16 × √10 = 4√10 ...Repeat after "First, let's simplify each square.." Now, substitute these values into the original expression which gives: (4√10)/(2√63) × (7√5)/(6√3) = (2√10)/√63 × (7√5)/(6√3) We get rid of the square root in the denominator by multiplying the fractions which gives: (2√10 × 7√5)/(√63 × 6√3) = 14√50/(6√189) = 20√2/√27. To simplify further and rationalize the denominator, we multiply by √27/√27, which gives: 20√2/√27 = 20√54/27 = 60√6/27 = 20√6/9. Simplifying further gives: 20√6/9 | 2308.08155#121 | 2308.08155#123 | 2308.08155 | [
"2103.03874"
]
|
2308.08155#123 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | Based on the result, return a list of tasks to be completed in order to meet the objective. Return one task per line in your response. The result must be a numbered list in the format: # First task # Second task The number of each entry must be followed by a period. If your list is empty, write "There are no tasks to add at this time." Unless your list is empty, do not include any headers before your numbered list or follow your numbered list with any other output. ****TASK CREATION AGENT RESPONSE**** 1. | 2308.08155#122 | 2308.08155#124 | 2308.08155 | [
"2103.03874"
]
|
2308.08155#124 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | Simplify and rationalize the denominator for the expression √225/√45 × √200/√125. This result was based on this task description: Completely simplify and rationalize the denominator: √160/√252 × √245/√108. 2. Simplify and rationalize the denominator for the expression √289/√361 × √100/√72. # ...Until 10 Adding new tasks to task storage 'task name': 'sqrt225sqrt45timesfracsqrt200sqrt125', 'task id': 2 'task name': 'Simplify and rationalize the denominator for the expression fracsqrt289sqrt361timesfracsqrt100sqrt72', 'task id': 3 ...Until task id 11 ****TASK PRIORITIZATION AGENT PROMPT**** You are tasked with prioritizing the tasks: Simplify and rationalize the denominator for the expression fracsqrt225sqrt45timesfracsqrt200sqrt125 Simplify and rationalize the denominator for the expression sqrt289sqrt361timesfracsqrt100sqrt72 ...Aborted | 2308.08155#123 | 2308.08155#125 | 2308.08155 | [
"2103.03874"
]
|
2308.08155#125 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | 40 Table 17: Application A1, Scenario 1: Autonomous math problem-solving. Preliminary test with CAMEL. We constructed two roles: "Math Solver" and "Python Programmer". The resulting code cannot be executed. The conversation continues, and the roles are not keen on solving the problem and giving an answer. | # Action # CAMEL **********AI Assistant SystemMessage********** SystemMessage(role_name='Python Programmer', role_type=<RoleType.ASSISTANT: 'assistant'>, meta_dict='task': ' | 2308.08155#124 | 2308.08155#126 | 2308.08155 | [
"2103.03874"
]
|
2308.08155#126 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | Using Python programming language, write a program that accepts four integers as input, applies square root function to each, simplifies and rationalizes the denominator of the fraction resulting from the multiplication of two fractions: √160/√252 × √245/√108, and returns the simplified expression.', 'assistant_role': 'Python Programmer', 'user_role': 'Math Solver', role='system', content='Never forget you are a Python Programmer and I am a Math Solver. Never flip roles! ...) **********AI User SystemMessage********** SystemMessage(role_name='Math Solver', role_type=<RoleType.USER: 'user'>, meta_dict='task': ' | 2308.08155#125 | 2308.08155#127 | 2308.08155 | [
"2103.03874"
]
|
2308.08155#127 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | Using Python programming language, write a program that accepts four integers as input, applies square root function to each, simplifies and rationalizes the denominator of the fraction resulting from the multiplication of two fractions: √160/√252 × √245/√108, and returns the simplified expression.', 'assistant_role': 'Python Programmer', 'user_role': 'Math Solver', role='system', content='Never forget you are a Math Solver and I am a Python Programmer. Never flip roles! You will always instruct me. We share a common interest in collaborating to successfully complete a task. I must help you to complete the task. Here is the task: | 2308.08155#126 | 2308.08155#128 | 2308.08155 | [
"2103.03874"
]
|
2308.08155#128 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | Using Python programming language, write a program that accepts four integers as input, applies square root function to each, simplifies and rationalizes the denominator of the fraction resulting from the multiplication of two fractions: √160/√252 × √245/√108, and returns the simplified expression.. ...) ********** Original task prompt ********** Completely simplify and rationalize the denominator: √160/√252 × √245/√108 ********** Specified task prompt ********** Using Python programming language, write a program that accepts four integers as input, applies square root function to each, simplifies and rationalizes the denominator of the fraction resulting from the multiplication of two fractions: √160/√252 × √245/√108, and returns the simplified expression. ********** Final task prompt ********** Using Python programming language, write a program that accepts ... | AI User Instruction: Import the necessary libraries for the task. Input: > | AI Assistant Solution: To solve this task, we will need the 'math' library for the square root function and the 'fractions' library to simplify the resulting fraction. Here is how to import these libraries: 1 import math 2 from fractions import Fraction The 'math' library provides functions that deal with mathematical operations. | 2308.08155#127 | 2308.08155#129 | 2308.08155 | [
"2103.03874"
]
|
2308.08155#129 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | The 'sqrt' function, which we will use in this task, is part of this library. The 'fractions' library provides functions for creating and manipulating rational numbers. The 'Fraction' function, which we will use to simplify the resulting fraction, is part of this library. Next request. | AI User Instruction: Define a function that accepts four integers as input. Input: > | AI Assistant Solution: Let's define a function named 'simplify_fraction' that accepts four integers as input. These integers will represent the numbers under the square root in the fractions. 1 def simplify_fraction(a, b, c, d): 2 pass In this function, 'a', 'b', 'c', and 'd' | 2308.08155#128 | 2308.08155#130 | 2308.08155 | [
"2103.03874"
]
|
2308.08155#130 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | are the parameters that will accept the four integers as input. The 'pass' statement is a placeholder and will be replaced with actual code in the next steps. Next request. | AI User Instruction: Inside the function, apply the square root function to each of the input integers. Input: | AI Assistant Solution: We will use the 'math.sqrt' function from the 'math' library to apply the square root to each of the input integers. Here is how to do it: 1 def simplify_fraction(a, b, c, d): 2 sqrt_a = math.sqrt(a) 3 sqrt_b = math.sqrt(b) 4 sqrt_c = math.sqrt(c) 5 sqrt_d = math.sqrt(d) In this code, 'sqrt_a', 'sqrt_b', 'sqrt_c', and 'sqrt_d' are variables that store the square root of the input integers 'a', 'b', 'c', and 'd' respectively. Next request. | 2308.08155#129 | 2308.08155#131 | 2308.08155 | [
"2103.03874"
]
|
2308.08155#131 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | # Human # Abort 41 Table 18: Application A1, Scenario 1: Autonomous math problem-solving. Preliminary evaluation on MetaGPT. It does not solve the math problem but gives a product solution. # Action MetaGPT # Prompt | Completely simplify and rationalize the denominator: √160/√252 × √245/√108 The boss has tasked me with creating a successful product that is efficient. ## Product Goals 1 [ 2 3 4 5 ] " | 2308.08155#130 | 2308.08155#132 | 2308.08155 | [
"2103.03874"
]
|
2308.08155#132 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | Create a product that meets the needs of the target market", "Ensure the product is efficient and easy to use", "Achieve a high level of customer satisfaction" ## User Stories 1 [ 2 "As a user, I want a product that is easy to navigate so that I can find what I need quickly", 3 "As a user, I want a product that is reliable and efficient so that I can depend on it", 4 "As a user, I want a product that meets my needs so that I can get value for my money", 5 "As a user, I want a product that is constantly updated and improved so that I can enjoy the best features", 6 "As a user, I want a product that offers good customer support so that I can get help when I need it" 7 ] ## Competitive Analysis 1 [ 2 "Product A: Offers similar features but lacks efficiency", 3 "Product B: Highly efficient but does not meet all the needs of the target market", 4 "Product C: Has a high level of customer satisfaction but is not easy to navigate", 5 "Product D: Is constantly updated and improved but lacks reliability", 6 "Product E: Offers good customer support but is not efficient", 7 "Product F: Is reliable and efficient but does not offer good customer support", 8 "Our Target Product: Aims to combine all the best features of the competitors while improving on their weaknesses" 9 ] ## Competitive Quadrant Chart | 2308.08155#131 | 2308.08155#133 | 2308.08155 | [
"2103.03874"
]
|
2308.08155#133 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | # Human Abort 42 Table 19: Application A7. Example output of MiniWobChat on one web manipulation task "click-button-sequence." Action AutoGen Init chat Executor agent to AssistantAgent: We have an autonomous computer control agent that can perform atomic instructions specified by natural language to control computers. There are two types of instructions it can execute. First, given the instruction that matches the regular expression "^type\s.{1,}$", it can type a list of characters via the keyboard. This instruction should specify the target keyboard input for the agent to type. Before this typing instruction, you should first locate the cursor by clicking the input box with click instruction. Second, given the instruction that matches the regular expression "^clickxpath\s.{1,}$", it can click an HTML element with an xpath that is visible on the webpage. The target of this instruction should be a valid xpath. Below is the HTML code of the webpage where the agent should solve a task. 1 <div id="wrap" data-wob_ref="2" data-wob_eps="e0"> 2 <div id="query">Click button ONE, then click button TWO.</div> 3 <div id="area" data-wob_ref="3" data-wob_eps="e0"> 4 <button id="subbtn" style="position: absolute; left:103px; top:87px" data-wob_ref="4" data-wob_eps="e0">ONE</button> 5 <button id="subbtn2" style="position: absolute; left:44px; top:97px" data-wob_ref="5" data-wob_eps="e0">TWO</button> 6 </div> 7 </div> task: Click button ONE, then click button TWO. plan: 1. Click the button with xpath "//button[@id='subbtn']". 2. Click the button with xpath "//button[@id='subbtn2']". Current task: Click button ONE, then click button TWO. plan: *************************************************************** AssistantAgent to Executor agent: 1.
Click the button with xpath "//button[@id='subbtn' | 2308.08155#132 | 2308.08155#134 | 2308.08155 | [
"2103.03874"
]
|
2308.08155#134 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | ]". 2. Click the button with xpath "//button[@id='subbtn2']". *************************************************************** Executor agent to AssistantAgent: Below is the HTML code of the webpage where the agent should solve a task. 1 <div id="wrap" data-wob_ref="2" data-wob_eps="e0"> 2 <div id="query">Click button ONE, then click button TWO.</div> 3 <div id="area" data-wob_ref="3" data-wob_eps="e0"> 4 <button id="subbtn" style="position: absolute; left:103px; top:87px" data-wob_ref="4" data-wob_eps="e0">ONE</button> 5 <button id="subbtn2" style="position: absolute; left:44px; top:97px" data-wob_ref="5" data-wob_eps="e0">TWO</button> 6 </div> 7 </div> Current task: Click button ONE, then click button TWO. Here is a plan you are following now. 1. Click the button with xpath "//button[@id='subbtn']". 2. Click the button with xpath "//button[@id='subbtn2']". | 2308.08155#133 | 2308.08155#135 | 2308.08155 | [
"2103.03874"
]
|
2308.08155#135 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | We have a history of instructions that have been already executed by the autonomous agent so far. No instruction has been executed yet. Based on the plan and the history of instructions executed so far, the first instruction should be *************************************************************** AssistantAgent to Executor agent: clickxpath //button[@id='subbtn'] *************************************************************** Executor agent to AssistantAgent: Hold on, please wait for my next instruction. *************************************************************** AssistantAgent to Executor agent: Sure, I'll be here when you're ready with your next instruction. *************************************************************** Executor agent to AssistantAgent: Below is the HTML code of the webpage where the agent should solve a task. 1 <div id="wrap" data-wob_ref="2" data-wob_eps="e0"> 2 <div id="query">Click button ONE, then click button TWO.</div> 3 <div id="area" data-wob_ref="3" data-wob_eps="e0"> 4 <button id="subbtn" style="position: absolute; left:103px; top:87px" data-wob_ref="4" data-wob_eps="e0" data-tampered="e0">ONE</button> 5 <button id="subbtn2" style="position: absolute; left:44px; top:97px" data-wob_ref="5" data-wob_eps="e0">TWO</button> 6 </div> 7 </div> Current task: Click button ONE, then click button TWO. Here is a plan you are following now. 1. Click the button with xpath "//button[@id='subbtn']". 2. Click the button with xpath "//button[@id='subbtn2']". We have a history of instructions that have been already executed by the autonomous agent so far. 
1: clickxpath //button[@id='subbtn'] Based on the plan and the history of instructions executed so far, the next proper instruction should be *************************************************************** AssistantAgent to Executor agent: clickxpath //button[@id='subbtn2'] *************************************************************** Executor agent to AssistantAgent: SUCCESS!!!! | | # TERMINATE 43 | 2308.08155#134 | 2308.08155 | [
"2103.03874"
]
|
|
2308.08493#0 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | arXiv:2308.08493v2 [cs.CL] 1 Oct 2023 # TIME TRAVEL IN LLMS: TRACING DATA CONTAMINATION IN LARGE LANGUAGE MODELS Shahriar Golchin & Mihai Surdeanu Department of Computer Science University of Arizona Tucson, AZ, USA {golchin,msurdeanu}@arizona.edu # ABSTRACT Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" | 2308.08493#1 | 2308.08493 | [
"2110.14168"
]
|
|
2308.08493#1 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets. 1 # INTRODUCTION | 2308.08493#0 | 2308.08493#2 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#2 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | The rise of Transformer networks (Vaswani et al. 2017) has spurred the development of large language models (LLMs), marking a new epoch in Natural Language Processing (NLP). This shift has led to an extensive range of LLMs (Touvron et al. 2023a;b; Biderman et al. 2023; Köpf et al. 2023; Chung et al. 2022; Penedo et al. 2023, inter-alia) which excel in various professional and academic benchmarks (Bang et al. 2023; Bubeck et al. 2023). Their superior performance is primarily attributed to the massive web data consumed by these billion/trillion-parameter LLMs during training. However, the impressive LLM performance observed on many downstream tasks (e.g., summarization, natural language inference, text classification) may be inflated due to data contamination, i.e., the presence of test data from these downstream tasks in the pre-training data of LLMs. Guaranteeing lack of contamination is not trivial due to two potential sources of contamination: directly from ingesting the official version of a dataset (easier to control), and indirectly through duplicated data found somewhere on the web (nearly impossible to control).1 The potential of data contamination is especially relevant for closed models such as the GPT-3/3.5 family (Brown et al. 2020) and GPT-4 (OpenAI 2023; Bubeck et al. 2023), and, needless to say, raises questions on the validity of evaluations and benchmarks conducted so far (Chang et al. 2023; Zhu et al. 2023; Bordt & von Luxburg 2023; Ray 2023; Penedo et al. 2023). 1While dataset licensing reduces indirect contamination to a certain extent, it does not eliminate it. For example, websites such as the Hugging Face page for datasets (Wolf et al. 2020) currently host copies of the OntoNotes (Weischedel et al. 2013) and CoNLL-2003 (Tjong Kim Sang & De Meulder 2003) datasets, despite the fact that their respective licenses prohibit it. 
| 2308.08493#1 | 2308.08493#3 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#3 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | To address this issue, we propose an inexpensive and robust approach to detect data contamination for a given dataset partition automatically. Importantly, our approach functions under two realistic assumptions: (a) we lack direct access to the pre-training data of the LLMs, and (b) we have limited computational resources. Intuitively, our method starts by identifying potential contamination in individual instances that are drawn from a small random sample of the corresponding dataset partition (we use samples of 10 instances in this work). Using the information obtained from individual instances, our approach then assesses if an entire dataset partition is contaminated. More formally, to identify contamination of individual instances, we employ a "guided instruction:" a prompt that integrates distinct identifiers from the source dataset from which the reference instance originates. Such information includes the dataset name, its partition (e.g., train, test, or validation), and a randomly selected initial portion of the reference instance, complemented by its label when relevant. | 2308.08493#2 | 2308.08493#4 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#4 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | With these signals in the prompt, we instruct the LLM to finish the given partial instance. Using these generated individual completions, we propose two heuristics to estimate if an entire dataset partition is contaminated. The first heuristic states that a partition is likely to be contaminated if the average overlap score between generated completions and reference instances (as measured by ROUGE-L (Lin 2004) and BLEURT (Sellam et al. 2020)) observed with the guided instruction is statistically significantly larger than the one measured with a "general instruction," which does not include the dataset and partition name. The second heuristic labels a partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning (ICL; Brown et al. (2020)) marks at least one generated completion as an exact match with the reference instance or at least two generated completions as near-exact matches, where near-exact match indicates a completion that exhibits considerable semantic and lexical alignment with the reference instance. The primary contributions of this paper are as follows: (1) We propose a novel data contamination detection method for LLMs that is inexpensive and robust. As indicated above, our method combines a "guided instruction" to complete partial instances randomly drawn from the investigated dataset partition and several heuristics to generalize from instance- to partition-level contamination decisions. (2) We evaluate our proposed methods in 28 distinct scenarios. These scenarios are created by two state-of-the-art LLMs: GPT-3.5 and GPT-4, and span seven datasets for classification, summarization,
and natural language inference (NLI) tasks. The rationale behind the 28 scenarios is that for each dataset, we separately explore potential data contamination in the train and test splits (or the validation set, in cases where the labeled test set is not publicly available). Our evaluation indicates that our best method is the one that uses guided instruction to complete partial instances, and the one that evaluates these completions by the GPT-4 few-shot ICL classifier, achieving 92%-100% accuracy compared to contamination labels assigned by human experts for dataset partitions. | 2308.08493#3 | 2308.08493#5 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#5 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | (3) Our analysis indicates that GPT-4 showed evidence of contamination within the test partitions of AG News (Zhang et al. 2015), WNLI (Wang et al. 2018), and XSum (Narayan et al. 2018) datasets. These findings support the observation that data contamination is a serious issue that must be considered in downstream evaluations when using LLMs. # 2 RELATED WORK Despite its importance, the topic of data contamination is not as thoroughly examined as its closely related field, data memorization (Carlini et al. 2023; Kandpal et al. 2022; Carlini et al. 2021; Razeghi et al. 2022). Among the limited investigations focusing specifically on data contamination in LLMs, we find notable examples in Radford et al. (2019) and Brown et al. (2020) on GPT-2 and GPT-3, respectively. They used high-order n-grams (e.g., 13-gram) to detect overlapping content between the pre-training data and the evaluation dataset. Most research subsequent to Brown et al. (2020) adopted similar methods for detecting data contamination (Touvron et al. 2023b; Du et al. 2022; Chowdhery et al. 2022; Wei et al. 2021), and most recently, substring matching for GPT-4 (OpenAI 2023). However, the scope of existing research has been predominantly confined to model providers, and it encounters specific limitations, particularly when applied to closed-source LLMs. | 2308.08493#4 | 2308.08493#6 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#6 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | These limitations primarily involve the need for access to pre-training data (Brown et al. 2020; Du et al. 2022; Wei et al. 2021), the requirement for substantial computational resources (Touvron et al. 2023b), or the need for extensive manual labor (Chowdhery et al. 2022). Our approach aims to overcome these hurdles, enabling the assessment of data contamination in scenarios where the pre-training data is either not openly accessible or when significant computational hardware is not available despite having access to the pre-training data. Our paper is closest in spirit to the work of Sainz et al. (2023), who also detected contamination when access to the pre-training data is not available. This effort prompted ChatGPT, particularly when GPT-3.5 is its base model, to generate the first instances from different dataset partitions. The underlying assumption here is that if an LLM can reproduce dataset instances, it must have been trained using that particular split. However, our research shows that this method can be unreliable and subject to failure. Such failures can result either from the sparsity introduced by the request to reproduce the first instances of a dataset split or from the inability to bypass the safety filters set by the model provider when the model is asked to generate copyrighted content like dataset instances. Throughout this paper, we refer to this approach as "ChatGPT-Cheat?," taking inspiration from the title of the referenced blog post. # 3 APPROACH In our approach, we operate under two core assumptions: (1) lacking direct access to the pre-training data of the LLMs, and (2) having limited computational resources. Given these premises, our detection strategy for data contamination is anchored by two pivotal insights. First, we examine individual instances within a dataset partition to spot contamination at the instance level. 
Second, given that LLMs are pre-trained on large-scale data, detecting contaminated instances can act as a signal of broader contamination. As a result, the associated partition can be labeled as being leaked to the LLM's pre-training data. To discern contamination at the instance level, we focus on replicating instances by the LLM. In this context, exact replicas of instances serve as red flags for contamination in the corresponding partition. | 2308.08493#5 | 2308.08493#7 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#7 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Note that, due to the inherent probabilistic behavior of LLMs, achieving perfect replicas is not always possible even when contamination is certain. Nevertheless, instances that are closely replicated have a twofold function: while they can offer insightful indications of potential contamination, the fact that many datasets draw from web-based sources implies that partial replicas can also arise by happenstance. This overlap introduces uncertainty in drawing a definitive conclusion about the underlying partition. Thus, it is essential to check for consistent and significant signs of contamination within the partition. In the following sections, we first elaborate on our method and the necessary components to compel the LLM into reproducing dataset instances. We then delve into the procedure for evaluating the contamination status of existing LLMs for an entire partition based on these instances. Furthermore, leveraging the fine-tuning option offered by OpenAI for the GPT-3.5 base model, we undertake a study in which we intentionally contaminate the GPT-3.5 base model with partitions that our method detected as uncontaminated. Subsequently, we subject the contaminated GPT-3.5 to our technique, further showcasing our method's effectiveness in pinpointing data contamination within LLMs. 3.1 DETECTING INSTANCE-LEVEL CONTAMINATION 3.1.1 COMPONENTS TO MEASURE INSTANCE-LEVEL CONTAMINATION To gauge instance-level contamination, we utilize two distinct methods: the first leverages BLEURT and ROUGE-L scores, while the second draws on few-shot ICL prompting with GPT-4. While each of these methods requires specific components to be employed, the first two components are shared between both.
For both methods, we begin our process by steering the LLM towards the (potentially contaminated) dataset partition using guided instruction that integrates the dataset name and the partition of interest. Next, we provide the random-length initial segment of a randomly selected instance and its label if it is available. The LLM is then instructed to complete it. For the first method, we repeat this step using general instruction that omits the dataset and partition name. An example of a guided versus a general instruction is depicted in Figure 1. We detail all the required components below. Instruction: | 2308.08493#6 | 2308.08493#8 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#8 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | [Figure 1 content, reconstructed from its two-column layout]
Guided instruction: You are provided with Sentence 1 from the validation split of the WNLI dataset. Finish Sentence 2 as appeared in the dataset. Sentence 2 must exactly match the instance in the dataset.
Sentence 1: The dog chased the cat, which ran up a tree. It waited at the top.
Label: 1 (entailment)
Sentence 2: The cat waited at the top.
General instruction: Finish Sentence 2 based on Sentence 1, such that the following label shows the logical relationship between Sentence 1 and Sentence 2.
Sentence 1: The dog chased the cat, which ran up a tree. It waited at the top.
Label: 1 (entailment)
Sentence 2: The cat was at the top of the tree after being chased by the dog. | 2308.08493#7 | 2308.08493#9 | 2308.08493 | [
"2110.14168"
]
|
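The two instruction styles shown in Figure 1 can be assembled programmatically. Below is a minimal sketch for a paired-instance dataset, reusing the Figure 1 wording; the helper names are illustrative, not from the paper:

```python
def guided_instruction(dataset: str, split: str, sentence1: str, label: str) -> str:
    # Guided: names the dataset and partition, steering the LLM toward
    # the (potentially memorized) split before asking for the completion.
    return (
        f"You are provided with Sentence 1 from the {split} split of the "
        f"{dataset} dataset. Finish Sentence 2 as appeared in the dataset. "
        f"Sentence 2 must exactly match the instance in the dataset.\n"
        f"Sentence 1: {sentence1}\nLabel: {label}\nSentence 2:"
    )

def general_instruction(sentence1: str, label: str) -> str:
    # General: the same completion request with no dataset/partition cue,
    # serving as the baseline against which the guidance is measured.
    return (
        "Finish Sentence 2 based on Sentence 1, such that the following label "
        "shows the logical relationship between Sentence 1 and Sentence 2.\n"
        f"Sentence 1: {sentence1}\nLabel: {label}\nSentence 2:"
    )
```

The only difference between the two prompts is the dataset/partition guidance, which is exactly what the partition-level comparison later exploits.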
2308.08493#9 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Figure 1: An example of a guided (left) and general (right) instruction employed for a paired-instance dataset. In this example, using GPT-4, the guided instruction results in an exact match, whereas the general instruction does not. (1) Guided Instruction: A Means to Navigate the LLM's Domain. By employing instruction-tuning on top of causal language modeling (CLM; Vaswani et al. (2017); Radford et al. (2018)), LLMs can be guided by human directives (Wei et al. 2022; Sanh et al. 2022; Chung et al. 2022). This serves as a tool for controlling the LLM's domain using natural language. Thus, we form guided instruction such that it incorporates the dataset and split name in the input prompt, thereby directing the LLM towards the underlying dataset split. A comprehensive list of all the instructions used in this study for different tasks/datasets can be found in Table 5 in Appendix A. (2) Unraveling Data History with Causal Language Modeling. Primarily, data contamination occurs during the CLM pre-training phase since it constitutes the largest part of training in LLMs and utilizes web data. Without instruction tuning, an LLM only attempts to complete an input prompt based on data seen during the CLM pre-training phase (Ouyang et al. 2022). Notable models that exhibit this behavior include GPT-2 and GPT-3. We, therefore, employ the next token prediction mechanism to trace data history. In particular, we feed the model the variable-length initial segment of a dataset instance, chosen randomly from a particular split, prompting it to finish the partial instance. For labeled instances, we integrate the corresponding labels in the input prompt. This reflects that if an instance was ingested during the LLM's pre-training, its label was ingested too. For paired-instance datasets, we present the model with the initial sentence and its corresponding label.
In the case of single-instance datasets, instances with multiple sentences are arbitrarily cut at the end of a complete sentence, whereas for instances containing a single (long) sentence, a random sentence fragment is eliminated. Finally, the LLM is tasked with finishing the provided initial part. Figure 1 shows this process for a paired-instance dataset. | 2308.08493#8 | 2308.08493#10 | 2308.08493 | [
"2110.14168"
]
|
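The cutting procedure just described can be sketched as follows. This is an illustrative approximation (sentence splitting on ". " is a simplification, and the paper's exact cutting details may differ):

```python
import random

def initial_segment(instance: str, rng: random.Random):
    """Split an instance into a random-length initial segment and the
    reference continuation the LLM is asked to reproduce."""
    sentences = [s for s in instance.split(". ") if s]
    if len(sentences) > 1:
        # Multi-sentence instance: cut at the end of a complete sentence.
        k = rng.randint(1, len(sentences) - 1)
        head = ". ".join(sentences[:k]) + "."
        tail = ". ".join(sentences[k:])
    else:
        # Single (long) sentence: remove a random trailing fragment.
        words = instance.split()
        k = rng.randint(1, len(words) - 1)
        head, tail = " ".join(words[:k]), " ".join(words[k:])
    return head, tail
```

The `head` goes into the prompt (with the label, for labeled instances), and the `tail` is the reference against which the LLM's completion is scored.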
2308.08493#10 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Therefore, once a contaminated LLM is prompted with guided instruction, its output should mirror the subsequent segment of the reference instance under the guidance of the dataset and split name. (3) General Instruction: An Alternative Facet of Causal Language Modeling. We formulate the general instruction to measure the impact of the guidance given in the guided instruction. This general instruction only requests the completion of the partial instance without specifying the dataset or its partition. As a result, when using this instruction, the generated sequence solely relies on the CLM pre-training phase, akin to autoregressive models without instruction tuning. This enables us to establish a baseline for generated random replicas and assess how much the guided instruction influences the LLM-generated part of the input partial instance. We assess this influence in terms of overlap, semantics, and structural similarity with the reference instance. This analysis is crucial as even when the output of the LLM does not perfectly match the reference instance, it still enables us to detect potential signs of contamination. 3.1.2 MEASURING INSTANCE-LEVEL CONTAMINATION We introduce two methods for measuring contamination at the instance level: BLEURT & ROUGE-L: To quantify the overlap between the completions (produced under both guided and general instructions) and reference instances, we employ two metrics: ROUGE-L (Lin 2004) and BLEURT (Sellam et al. 2020). While ROUGE-L assesses lexical similarity, BLEURT gauges the semantic relevance and fluency of the resulting sequence with respect to the reference instance. Instance-level contamination is detected if the average overlap scores from either metric, when applied to completions from the guided instruction, exceed those from the general instruction.
GPT-4 Evaluation: While both BLEURT and ROUGE-L quantify the overlap between the generated and reference instances, they fall short of pinpointing near-exact matches. To bridge this gap, we adopt few-shot ICL prompting (Brown et al. 2020) to dictate the detection of exact/near-exact matches based on human judgments (see Section 4: Human Evaluation for our definition of a near-exact match). Specifically, this method includes a few representative examples of exact and near-exact matches (sourced from human evaluations) in the prompt, which are used to assess all other generated completions. | 2308.08493#9 | 2308.08493#11 | 2308.08493 | [
"2110.14168"
]
|
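The contamination signal described here, a higher average overlap under guided instruction than under general instruction, can be checked for statistical significance with a paired bootstrap over the sampled instances. This is a generic sketch; the paper's exact resampling procedure is described in its Appendix D and may differ:

```python
import random

def guided_beats_general(guided, general, iters=10_000, alpha=0.05, seed=0):
    """Paired bootstrap: resample instance indices with replacement and
    count how often the guided-instruction mean fails to exceed the
    general-instruction mean; significant if that fraction is below alpha."""
    assert len(guided) == len(general)
    rng = random.Random(seed)
    n, failures = len(guided), 0
    for _ in range(iters):
        idx = [rng.randrange(n) for _ in range(n)]
        if sum(guided[i] for i in idx) <= sum(general[i] for i in idx):
            failures += 1
    return failures / iters < alpha
```

The inputs would be the per-instance ROUGE-L or BLEURT scores for the ten sampled completions under each instruction type.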
2308.08493#11 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | We chose GPT-4 for this task as it requires no specialized prompting technique (Bubeck et al. 2023), enhancing the reliability of its results. A visual representation of the few-shot ICL prompt used in our study can be seen in Figure 3 in Appendix B. Also, detailed examples, including their ROUGE-L and BLEURT scores, as well as both human and GPT-4 few-shot ICL evaluations, are listed in Table 6 in Appendix C. 3.2 DETECTING PARTITION-LEVEL CONTAMINATION To generalize from instance-level contamination to partition-level discrete decisions (i.e., the partition is/is not contaminated), we take advantage of two observations: Idea 1: A dataset is likely to be contaminated if the average overlap score with the reference instances (as measured by ROUGE-L and BLEURT) observed with completions from the guided instruction is significantly larger than the one measured with the completions from the general instruction. The motivation behind this idea is that since the only difference between the two instructions is that the guided instruction contains the dataset and partition name as guidance, the improvement can only be explained by contamination. Idea 2: A dataset is likely to be contaminated if GPT-4 using few-shot ICL prompting detects at least one exact match or at least two near-exact matches. The intuition behind this idea is that even a small contaminated part of the sample of instances is likely indicative of a larger dataset partition leak. While the presence of an exact match among replicas generated by the LLM is a clear sign of contamination, the approach to handling exact or near-exact matches, and deciding the number of such matches that indicates broader contamination, can be tailored depending on specific research objectives.
In this paper, we intuitively establish the above-mentioned criterion to extrapolate from the instance-level to the partition-level contamination. An empirical validation of our approach is also provided in Section 3.3. We propose two algorithms, each implementing one of these ideas respectively. Algorithm 1: A dataset partition is labeled as contaminated if the average overlap score (as provided by BLEURT and ROUGE-L) between the reference instances and generated texts with the guided instruction on a sample of ten instances is statistically signifi- | 2308.08493#10 | 2308.08493#12 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#12 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | cantly better than those produced by general instruction under a non-parametric bootstrap resampling test.² The advantage of this algorithm is that it is non-parametric, i.e., we do not need to decide on an arbitrary threshold on the ROUGE-L or BLEURT scores to indicate contamination. However, its drawback is that even a significant increase in overlap may still come from generated instances that a human would not consider an exact or near-exact match. Algorithm 2 addresses this limitation. Algorithm 2: A dataset partition is labeled as contaminated if GPT-4 with few-shot ICL prompting flags at least one generated completion as an exact match or a minimum of two completions as near-exact matches within a sample of ten instances. All completions in this setting are generated solely by guided instruction. We evaluate both these algorithms in Section 5.
²Details of our bootstrap resampling method can be found in Appendix D.
Instruction: You are provided with Sentence 1 from the train split of the RTE dataset. Finish Sentence 2 as appeared in the dataset. Sentence 2 must exactly match the instance in the dataset. Sentence 1: | 2308.08493#11 | 2308.08493#13 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#13 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Twelve of Jupiter's moons are relatively small and seem to have been more likely captured than to have been formed in orbit around Jupiter. Label: 0 (not entailment) Sentence 2:
---
GPT-3.5: The formation of Jupiter's twelve relatively small moons is more likely due to their capture rather than being formed in orbit around Jupiter. | 2308.08493#12 | 2308.08493#14 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#14 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models |
---
Contaminated GPT-3.5: Jupiter has twelve moons.
Figure 2: An example of an exact match generated by the GPT-3.5 contaminated with the train split of the RTE dataset versus an inexact match generated by the GPT-3.5 base model, both under the same guided instruction. This example is one of the training instances used during contamination.
Table 1: Results after introducing intentional contamination to the GPT-3.5 base model using guided instruction. A tick (X) indicates the identification of at least one exact replica from the training instances used for contamination by our top-performing method (Alg. 2: GPT-4 ICL) and human evaluation.
Table 2: Results of identifying contamination of the GSM8k dataset within GPT-4 when guided instruction is used. A double tick (XX) signals the identification of two or more near-exact replicas from the train split of this dataset by our top-performing method (Alg. 2: GPT-4 ICL) and human evaluation. | 2308.08493#13 | 2308.08493#15 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#15 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Table 1 body:
Method             AG News  RTE  XSum
Alg. 2: GPT-4 ICL  X        X    X
Human Evaluation   X        X    X

Table 2 body:
Method             GSM8k
Alg. 2: GPT-4 ICL  XX
Human Evaluation   XX

3.3 INSTANCE REPLICATION: A VALID APPROACH TO DETECT DATA CONTAMINATION To validate our choice for the hyperparameters used in Algorithm 2, i.e., the number of exact/near-exact matches to declare contamination, we performed a controlled study in which an LLM is contaminated on purpose with several datasets. To this end, we used the GPT-3.5 base model and a subset of the train partition of the following datasets (one dataset from each task in question): | 2308.08493#14 | 2308.08493#16 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#16 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | AG News, RTE, and XSum. Note that all these partitions were marked as uncontaminated for GPT-3.5 by the human evaluators (see Table 4 and Section 4: Human Evaluation). To mimic the LLM's pre-training on web data, we retained only minimal metadata about the datasets as they appear on the web when scraped. In particular, we used: the dataset title, the partition name, and the entire instance.³ Following training, we evaluate the generated completions by our best-performing technique (Algorithm 2: GPT-4 ICL) (see Table 3). Figure 2 visualizes the generated replicas before and after contamination in one of our experiments when guided instruction is utilized.⁴ In addition, Table 1 summarizes our findings from this study. The key conclusion of this experiment is that the contaminated LLM generated at least one exact match in each setting. This underscores that the replication of even one exact match stands as a robust and undeniable indicator of contamination.⁵ As a second experiment, we employed GPT-4 and the GSM8k dataset (Cobbe et al. 2021). This choice was motivated by OpenAI's technical report on GPT-4, which indicates contamination from its train split (OpenAI 2023). Given that this dataset comprises mathematical problems, our objective is to replicate the questions in the dataset while withholding their corresponding answers.⁶ Table 2 reports our results from this experiment. Our results highlight that contamination is not solely identified through exact matches; near-exact matches are also indicative. To account for the probabilistic nature of LLMs, we set a threshold of two for the minimum number of near-exact matches to indicate contamination. As shown, this is supported by the data.
³All data formats used for the contamination of GPT-3.5 are detailed in Table 9 in Appendix E.
⁴Further examples are provided in Table 10 in Appendix F.
⁵Details on the continued training of the GPT-3.5 base model are presented in Appendix E.
⁶An example of this replication process is provided in Table 10 in Appendix F.
# 4 EXPERIMENTAL SETUP Data: Our evaluation employs seven datasets derived from various tasks, namely classification, summarization, and NLI. | 2308.08493#15 | 2308.08493#17 | 2308.08493 | [
"2110.14168"
]
|
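To mimic how a dataset appears in scraped web text, a training record for the intentional-contamination study might be assembled as below. This is an illustrative sketch; the exact formats actually used are listed in the paper's Table 9 in Appendix E:

```python
def contamination_record(title: str, split: str, instance: str) -> str:
    # Only minimal metadata is kept: the dataset title, the partition name,
    # and the entire instance, echoing how datasets surface on the web.
    return f"{title} dataset, {split} split\n\n{instance}"
```

Fine-tuning the base model on such records (one per training instance) reproduces the contamination condition that the detection method is then tested against.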
2308.08493#17 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | The datasets in question involve IMDB (Maas et al. 2011), AG News (Zhang et al. 2015), Yelp Full Reviews (Zhang et al. 2015), SAMSum (Gliwa et al. 2019), XSum (Narayan et al. 2018), WNLI (Wang et al. 2018), and RTE (Wang et al. 2019). In order to ensure a comprehensive experimental setup, all our experiments are carried out on both the training and test/validation splits of the aforesaid datasets. We make use of the publicly available divisions, working with the training and test splits for each. However, for the last two datasets, only the validation splits were publicly accessible with their labels. Considering our research's emphasis on pinpointing data contamination with minimal dataset instances, the resource constraints, and our intention to facilitate the replication of this approach by other researchers, we randomly chose 10 instances from each split for our experiments. Setting: We use snapshots of GPT-3.5 and GPT-4 from June 13, 2023 (specifically gpt-3.5-turbo-0613 and gpt-4-0613), both accessed via the OpenAI API, as our foundation LLMs. To obtain deterministic results, we set the temperature to zero and capped the maximum completion length at 500 tokens. In contrast, our comparative method (ChatGPT-Cheat?) uses the chat user interface (UI), which we also leveraged for conducting the experiment under this method. | 2308.08493#16 | 2308.08493#18 | 2308.08493 | [
"2110.14168"
]
|
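The evaluation setup (10 randomly drawn instances per split, deterministic decoding) can be pinned down as follows. The seed value is illustrative; `temperature` and `max_tokens` mirror the settings quoted above:

```python
import random

def sample_instances(split_instances, k=10, seed=0):
    # Draw the small per-split evaluation sample reproducibly.
    return random.Random(seed).sample(split_instances, k)

# Deterministic decoding settings for the gpt-3.5-turbo-0613 / gpt-4-0613
# API calls: greedy decoding, completions capped at 500 tokens.
DECODING_KWARGS = {"temperature": 0, "max_tokens": 500}
```

Fixing both the sample and the decoding settings makes the per-partition results repeatable across runs.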
2308.08493#18 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Specifically, we used the UI versions of GPT-4 and GPT-3.5 that were released on July 20, 2023. Human Evaluation: We undertake a human evaluation, led by two domain experts,⁷ to characterize contamination by identifying both exact matches and near-exact matches of individual instances. The term "exact matches" is self-explanatory; "near-exact matches" are completions by the LLM that, while not identical, show considerable overlap and maintain significant semantic and structural similarity to the reference instance. To generalize from individual instances to entire partitions, the human annotators followed the rule described in Algorithm 2 that was validated empirically in Section 3.3: a partition is flagged as contaminated if the instance-based evaluation identifies at least one exact match or at least two near-exact matches. Evaluation Metrics: In our analysis, the computation of the BLEURT score varies based on the structure of the dataset/instance, as this metric hinges on the fluency and quality of the generated sequence. For single-instance datasets, where individual instances are randomly cut off mid-sentence and then completed by the LLM, we join the model-produced continuation to the severed reference instance and then calculate the BLEURT score. Conversely, for instances from paired-instance and multi-sentence single-instance datasets, the BLEURT score is computed solely for the newly produced sequence. We highlight that our BLEURT score computations use the most recent checkpoint provided, i.e., BLEURT-20 (Pu et al. 2021). On the other hand, regardless of the dataset/instance type, the ROUGE-L score calculation exclusively pertains to the portions of the text finished by the LLM. This is due to the score's dependency on statistical attributes rather than semantic consistency.
Comparative Framework: We compare our proposed methods against the ChatGPT-Cheat? method (Sainz et al. 2023). Unlike our method, which uses a binary scale to determine contamination, the comparison approach includes a "suspicious" category. This designation is invoked when the LLM, upon being asked to generate the first instances of a dataset split, outputs characteristic attributes such as data format, IDs, or other dataset-specific details instead of the actual instances. | 2308.08493#17 | 2308.08493#19 | 2308.08493 | [
"2110.14168"
]
|
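ROUGE-L over only the LLM-finished portion reduces to a longest-common-subsequence computation over tokens. A plain re-implementation for illustration (the paper presumably relied on a standard ROUGE package rather than this sketch):

```python
def rouge_l(candidate: str, reference: str) -> float:
    """ROUGE-L F1 between the LLM-completed portion and the reference
    continuation, via longest common subsequence (LCS) over whitespace tokens."""
    c, r = candidate.split(), reference.split()
    # Dynamic-programming LCS length.
    dp = [[0] * (len(r) + 1) for _ in range(len(c) + 1)]
    for i, ct in enumerate(c, 1):
        for j, rt in enumerate(r, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ct == rt else max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[len(c)][len(r)]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)
```

Because it counts statistical token overlap rather than meaning, it is applied only to the generated continuation, as the text above explains.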
2308.08493#19 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | If the model, on the other hand, fails to produce these characteristics, it is deemed uncontaminated. # 5 RESULTS AND DISCUSSION Table 3 lists the overall accuracy of our proposed methods in 28 distinct settings: two LLMs (GPT-4 and GPT-3.5) × 14 dataset partitions coming from seven datasets. Table 4 provides a detailed breakdown of each method per dataset partition and the respective LLM. We draw the following observations from our experiments: (1) Algorithm 1, which hinges on the difference in average overlap scores between outputs from guided instruction and those from general instruction, performs well in the majority of settings. Its best performance is a success rate of 13/14 when using GPT-4 as the underlying model and 9/14
⁷The two annotators had almost perfect inter-rater agreement across all settings. This is due to the fact that a small subset of instances was used for contamination detection, and contamination is evident when it occurs.
Table 3: Overall accuracy at detecting contamination across 14 partitions for GPT-4 and GPT-3.5. The two LLMs are evaluated against human annotators. | 2308.08493#18 | 2308.08493#20 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#20 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | The "Success Rate" shows how often each method matches human judgment, while the "Accuracy" gives the corresponding percentages.

Method                          GPT-4 Success Rate  GPT-4 Accuracy  GPT-3.5 Success Rate  GPT-3.5 Accuracy
Strict Eval.: ChatGPT-Cheat?    0/14                0.00%           11/14                 78.57%
Lenient Eval.: ChatGPT-Cheat?   9/14                64.29%          13/14                 92.86%
Algorithm 1: BLEURT             11/14               78.57%          9/14                  64.29%
Algorithm 1: ROUGE-L            13/14               92.86%          7/14                  50.00%
Algorithm 2: GPT-4 ICL          14/14               100.00%         13/14                 92.86%

when using GPT-3.5. We consider these results exciting given the algorithm's simplicity. However, Table 3 shows that: (a) its performance is not universally good: it performs at chance level when using ROUGE-L on GPT-3.5 outputs (7/14), and (b) its success rate varies depending on the metric in use (i.e., BLEURT or ROUGE-L). (2) In contrast, Algorithm 2, which relies on GPT-4 evaluation using the few-shot ICL prompt, aligns closely with human evaluations. | 2308.08493#19 | 2308.08493#21 | 2308.08493 | [
"2110.14168"
]
|
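The "Accuracy" column in Table 3 is simply the success rate over the 14 partitions expressed as a rounded percentage, which can be checked directly:

```python
def accuracy(success_rate: str) -> float:
    # "13/14" -> 92.86 (percent, rounded to two decimals as in Table 3).
    k, n = map(int, success_rate.split("/"))
    return round(100 * k / n, 2)
```

For example, Algorithm 2's 13/14 on GPT-3.5 yields the reported 92.86%.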
2308.08493#21 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Specifically, in experiments run on GPT-4 and GPT-3.5, its success rates are 14/14 and 13/14, respectively. These accuracies are higher than any produced by Algorithm 1 and maintain consistency across all the settings with the two LLMs. (3) Upon assessing the results of the ChatGPT-Cheat? method, we discover that this method invariably labels partitions as suspicious (likely due to the precaution against generating copyrighted content, which is activated by safety filters) for all scenarios involving GPT-4. Given this, we interpret the outcomes of this method through two lenses: strict and lenient evaluation. In the strict evaluation, we do not interpret the suspicious label as contaminated or uncontaminated. Under this assessment, no partition is correctly classified according to human evaluation (0/14) in settings with GPT-4, and 11/14 in settings with GPT-3.5. In the lenient evaluation, we convert the suspicious label to either contaminated or uncontaminated in a way that maximizes the performance of this method. In this setting, the ChatGPT-Cheat? method correctly identifies 9/14 and 13/14 in settings with GPT-4 and GPT-3.5, respectively. However, this lenient evaluation is unrealistic due to the overfitting in interpreting the suspicious label. These findings support our observation that identifying contamination at the instance level, before extrapolating to the partition level, is a more resilient strategy. (4) Last but not least, the human evaluation reveals that the train and test/validation splits of both the AG News and WNLI datasets were included in GPT-4's pre-training data. However, for IMDB and RTE, only the training partitions were incorporated, while for XSum, only the test split was leaked. For GPT-3.5, the only data exposure was the test partition of the XSum dataset. | 2308.08493#20 | 2308.08493#22 | 2308.08493 | [
"2110.14168"
]
|
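The per-partition verdicts discussed in points (2)–(4) all follow the fixed decision rule validated in Section 3.3: a partition is contaminated if the sampled completions contain at least one exact match or at least two near-exact matches. A direct transcription of that rule:

```python
def partition_contaminated(match_labels) -> bool:
    # match_labels: per-instance judgments (from GPT-4 ICL or human
    # annotators), each one of "exact", "near-exact", or "none",
    # for the ten sampled completions of a partition.
    return (match_labels.count("exact") >= 1
            or match_labels.count("near-exact") >= 2)
```

A single near-exact match is deliberately not enough, which accounts for the probabilistic decoding noted earlier.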
2308.08493#22 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | These findings confirm that, despite their creators' efforts, today's LLMs have ingested NLP datasets. We hope that this observation informs the design of better scientific experiments with LLMs in the NLP space in the future. # 6 CONCLUSION We proposed a novel method to detect data contamination in LLMs, assuming no access to their pre-training data. Our approach begins by pinpointing data contamination at the instance level. This was achieved by prompting the LLM to produce the replica of the secondary segment of a dataset instance given its random-length initial segment, dataset name, and partition type, a process we called "guided instruction." From here, we adopted a set of rules to generalize from instance-level to broader partition-level contamination. This involved leveraging statistically significant differences from BLEURT and ROUGE-L scores between generated completions by guided and general instructions, as well as evaluations from GPT-4 with few-shot in-context learning prompting. Our evaluation spanned 28 different settings, including seven datasets along with their respective train and test/validation partitions and two LLMs: GPT-4 and GPT-3.5. Our findings indicated that while the replication technique via guided instruction is notably effective, the most accurate evaluation approach that was closely aligned with human judgments for detecting data contamination
Table 4: An assessment of our proposed methods in contrast to the ChatGPT-Cheat? method. We evaluate Algorithm 1 using BLEURT and ROUGE-L, as well as Algorithm 2, which relies on GPT-4 decisions via few-shot ICL prompting. The evaluations are performed on 10 instances randomly drawn from each split of a particular dataset, with GPT-4 and GPT-3.5 serving as the LLMs that are investigated.
Partition-level contamination is represented in the following ways: (1) While asterisks (*) indicate statistically significant differences between the completions produced by guided and general instructions (as measured by BLEURT and ROUGE-L), underlined numbers indicate settings that align with human evaluations (Algorithm 1). (2) A single tick (X) points to the presence of at least one exact match, while a double tick (XX) signals the identification of two or more near-exact matches (Algorithm 2). | 2308.08493#21 | 2308.08493#23 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#23 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | A cross sign (✗) denotes that neither of the aforementioned conditions was met. For the ChatGPT-Cheat? method, this cross sign indicates that the model's output does not contain any specific information about the first instances of the dataset partition upon the request to generate them. For the same method, the question mark (?) highlights partitions that are deemed suspicious.
[Table 4 body, GPT-4 block; column alignment was lost in extraction, so the entries are reproduced in their extracted order]
# Datasets IMDB AG News Yelp RTE WNLI SAMSum XSum Split Instruct. Alg. 1: BLEURT 0.47 0.43 0.54 0.41 *0.60 *0.62 0.41 0.50 0.50 0.38 *0.53 *0.65 0.63 *0.70 0.62 *0.72 0.58 0.58 0.58 0.59 General Guided General Guided 0.43 0.48 0.43 0.42 Train Test/Valid 0.54 0.60 0.64 0.67 Alg. 1: ROUGE-L 0.17 *0.35 0.16 *0.37 0.13 0.14 0.12 0.15 0.26 0.15 0.41 0.17 *0.51 *0.59 0.15 0.31 0.36 0.16 0.34 *0.63 0.14 *0.24 0.16 0.16 General Guided General Guided Train Test/Valid 0.18 *0.38 0.23 *0.38 Alg. 2: GPT-4 ICL X ✗ X XX ✗ XX X X ✗ ✗ Guided Train Test/Valid Guided ✗ ✗ ✗ X ChatGPT-Cheat? Train Guided Test/Valid Guided ? ? ? ? ? ? ? ? ? ? ? ? ? ? Human Evaluation X ✗ X XX ✗ XX X X ✗ ✗ Train Guided Test/Valid Guided ✗ ✗ ✗ X Alg. 1: | 2308.08493#22 | 2308.08493#24 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#24 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | [Table 4 body, GPT-3.5 block; column alignment was lost in extraction, so the entries are reproduced in their extracted order] BLEURT 0.59 0.58 0.58 0.59 0.45 0.50 0.49 0.42 0.50 *0.56 0.47 0.42 0.47 0.40 *0.53 *0.54 General Guided General Guided 0.58 *0.64 0.60 0.62 0.45 0.39 0.45 0.43 Train Test/Valid 0.54 0.56 0.62 0.62 Alg. 1: ROUGE-L 0.12 0.12 0.13 0.14 General Guided General Guided 0.10 0.11 0.13 0.17 0.06 *0.16 0.10 *0.20 0.13 0.37 0.29 *0.16 0.32 *0.43 0.11 0.23 0.32 *0.14 0.31 *0.42 Train Test/Valid 0.14 0.22 0.18 0.23 Alg. 2: GPT-4 ICL Train Guided Test/Valid Guided ✗ ✗ ✗ ✗ ✗ ✗ ✗ ✗ ✗ ✗ ✗ ✗ ✗ ✗ ChatGPT-Cheat? Train Guided Test/Valid Guided ✗ ✗ ✗ ✗ ✗ ✗ ✗ ✗ ? ? ✗ ✗ ✗ ✗ Human Evaluation Train Guided Test/Valid Guided ✗ ✗ ✗ ✗ ✗ ✗ ✗ ✗ ✗ ✗ ✗ ✗ ✗ XX | 2308.08493#23 | 2308.08493#25 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#25 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | was the few-shot in-context learning prompt with GPT-4, which integrates a few example instances from human assessments in the input prompt. This method yielded a success rate in pinpointing data contamination across 14/14 scenarios for GPT-4 and 13/14 for GPT-3.5. # REFERENCES Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, and Pascale Fung. A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity, 2023. Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. The fifth PASCAL recognizing textual entailment challenge. TAC, 7:8, 2009. | 2308.08493#24 | 2308.08493#26 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#26 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. Pythia: A suite for analyzing large language models across training and scaling, 2023. Sebastian Bordt and Ulrike von Luxburg. Chatgpt participates in a computer science exam. ArXiv, abs/2303.09461, 2023. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf. Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. Sparks of artificial general intelligence: Early experiments with gpt-4, 2023. | 2308.08493#25 | 2308.08493#27 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#27 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, and Colin Raffel. Extracting training data from large language models, 2021. Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. Quantifying memorization across neural language models, 2023. Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Kaijie Zhu, Hao Chen, Linyi Yang, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, and Xing Xie. A survey on evaluation of large language models, 2023. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. | 2308.08493#26 | 2308.08493#28 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#28 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. Scaling instruction-finetuned language models, 2022. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. Ido Dagan, Oren Glickman, and Bernardo Magnini. The pascal recognising textual entailment challenge. In Machine learning challenges workshop, pp. 177–190. Springer, 2005. Nan Du, Yanping Huang, Andrew M Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, et al. | 2308.08493#27 | 2308.08493#29 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#29 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Glam: Efficient scaling of language models with mixture-of-experts. In International Conference on Machine Learning, pp. 5547–5569. PMLR, 2022. B. Efron. Bootstrap Methods: Another Look at the Jackknife. The Annals of Statistics, 7(1):1–26, 1979. doi: 10.1214/aos/1176344552. URL https://doi.org/10.1214/aos/1176344552. | 2308.08493#28 | 2308.08493#30 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#30 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Bradley Efron. Second Thoughts on the Bootstrap. Statistical Science, 18(2):135–140, 2003. doi: 10.1214/ss/1063994968. URL https://doi.org/10.1214/ss/1063994968. Bradley Efron and Robert J. Tibshirani. An Introduction to the Bootstrap. Number 57 in Monographs on Statistics and Applied Probability. Chapman & Hall/CRC, Boca Raton, Florida, USA, 1993. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pp. 1–9, Prague, June 2007. Association for Computational Linguistics. URL https://aclanthology.org/W07-1401. Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. SAMSum corpus: A human-annotated dialogue dataset for abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pp. 70–79, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-5409. URL https://www.aclweb.org/anthology/D19-5409. R Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. The second pascal recognising textual entailment challenge. In Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment, volume 7, pp. 785–794, 2006. Nikhil Kandpal, Eric Wallace, and Colin Raffel. Deduplicating training data mitigates privacy risks in language models, 2022. Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Richárd Nagyfi | 2308.08493#29 | 2308.08493#31 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#31 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | , Shahul ES, Sameer Suri, David Glushkov, Arnav Dantuluri, Andrew Maguire, Christoph Schuhmann, Huu Nguyen, and Alexander Mattick. Openassistant conversations – democratizing large language model alignment, 2023. Hector Levesque, Ernest Davis, and Leora Morgenstern. The winograd schema challenge. In Thirteenth international conference on the principles of knowledge representation and reasoning, 2012. Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pp. 74– | 2308.08493#30 | 2308.08493#32 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#32 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | 81, Barcelona, Spain, July 2004. Association for Computational Linguistics. URL https://aclanthology.org/W04-1013. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp. 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/P11-1015. | 2308.08493#31 | 2308.08493#33 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#33 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Shashi Narayan, Shay B. Cohen, and Mirella Lapata. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. ArXiv, abs/1808.08745, 2018. OpenAI. Gpt-4 technical report, 2023. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback, 2022. | 2308.08493#32 | 2308.08493#34 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#34 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The refinedweb dataset for falcon llm: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116, 2023. Amy Pu, Hyung Won Chung, Ankur P Parikh, Sebastian Gehrmann, and Thibault Sellam. Learning compact metrics for mt. In Proceedings of EMNLP, 2021. Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. 2018. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. | 2308.08493#33 | 2308.08493#35 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#35 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Partha Pratim Ray. Chatgpt: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems, 2023. Yasaman Razeghi, Robert L. Logan IV, Matt Gardner, and Sameer Singh. Impact of pretraining term frequencies on few-shot reasoning, 2022. Julen Etxaniz. Did ChatGPT cheat on your test? https://hitz-zentroa.github.io/lm-contamination/blog/, 2023. Accessed: 2023-07-06. Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Tali Bers, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. | 2308.08493#34 | 2308.08493#36 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#36 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Multitask prompted training enables zero-shot task generalization, 2022. Thibault Sellam, Dipanjan Das, and Ankur Parikh. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7881–7892, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.704. URL https://aclanthology.org/2020.acl-main.704. Erik F. Tjong Kim Sang and Fien De Meulder. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pp. 142–147, 2003. URL https://www.aclweb.org/anthology/W03-0419. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models, 2023a. | 2308.08493#35 | 2308.08493#37 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#37 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models, 2023b. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need, 2017. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 353– | 2308.08493#36 | 2308.08493#38 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#38 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | 355, Brussels, Belgium, November 2018. Association for Computational Linguistics. doi: 10.18653/v1/W18-5446. URL https://aclanthology.org/W18-5446. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems. arXiv preprint arXiv:1905.00537, 2019. Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021. Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. Finetuned language models are zero-shot learners, 2022. Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. Ontonotes release 5.0 ldc2013t19. Linguistic Data Consortium, Philadelphia, PA, 23:170, 2013. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45, Online, October 2020. Association for Computational Linguistics. | 2308.08493#37 | 2308.08493#39 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#39 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | https://www.aclweb.org/anthology/2020.emnlp-demos.6. Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In NIPS, 2015. Yiming Zhu, Peixian Zhang, Ehsan ul Haq, Pan Hui, and Gareth Tyson. Can chatgpt reproduce human-generated labels? a study of social computing tasks. ArXiv, abs/2304.10145, 2023. # Appendices # A LIST OF ALL GUIDED AND GENERAL INSTRUCTIONS Table 5 presents a thorough collection of all the guided and general instructions employed throughout our study. Table 5: A comprehensive list of all guided and general instructions used in our experiments. Placeholders include: {split name} for the partition (or split) name; {dataset name} for the dataset name; {input} for the first part of the dataset instance cut at the tail randomly or the whole first sentence in NLI-based datasets; and {label} for the corresponding label of the incomplete input instance. # Task # Guided Instruction # General Instruction Classification (single-instance) Instruction: You are provided with the first piece of an instance from the {split name} split of the {dataset name} dataset. Finish the second piece of the instance as exactly appeared in the dataset. Only rely on the original form of the instance in the dataset to finish the second piece. Label: {label} First Piece: {input} Second Piece: Instruction: Finish the second piece based on the first piece, such that these two pieces become a single instance with the following label. Label: {label} First Piece: {input} Second Piece: NLI (paired-instance) Instruction: You are provided with Sentence 1 from the {split name} split of the {dataset name} dataset. Finish Sentence 2 as appeared in the dataset. Sentence 2 must exactly match the instance in the dataset. 
Sentence 1: {input} Label: {label} Sentence 2: Instruction: Finish Sentence 2 based on Sentence 1, such that the following label shows the logical relationship between Sentence 1 and Sentence 2. | 2308.08493#38 | 2308.08493#40 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#40 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Sentence 1: {input} Label: {label} Sentence 2: Summarization (single-instance) Instruction: You are provided with the first piece of a summary from the {split name} split of the {dataset name} dataset. Finish the second piece of the summary as exactly appeared in the dataset. Only rely on the original form of the summary in the dataset to finish the second piece. First Piece: {input} Second Piece: Instruction: Finish the second piece based on the first piece, such that these two pieces become a single summary. First Piece: {input} Second Piece: One-sentence Summary (single-instance) Instruction: You are provided with the first piece of a one-sentence summary from the {split name} split of the {dataset name} dataset. Finish the second piece of the summary as exactly appeared in the dataset. Only rely on the original form of the summary in the dataset to finish the second piece. First Piece: {input} Second Piece: Instruction: Finish the second piece based on the first piece, such that these two pieces become a single one-sentence summary. First Piece: {input} Second Piece: | 2308.08493#39 | 2308.08493#41 | 2308.08493 | [
"2110.14168"
]
|
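The guided and general instructions in Table 5 are plain templates with `{split name}`, `{dataset name}`, `{input}`, and `{label}` placeholders. A minimal sketch of filling the guided classification template — the helper name `build_guided_prompt` and the underscore placeholder keys are ours, not from the authors' released code:

```python
# Template text follows the paper's Table 5 guided instruction for
# single-instance classification; placeholder names use underscores so
# they are valid str.format() keyword arguments.
GUIDED_CLASSIFICATION = (
    "Instruction: You are provided with the first piece of an instance from "
    "the {split_name} split of the {dataset_name} dataset. Finish the second "
    "piece of the instance as exactly appeared in the dataset. Only rely on "
    "the original form of the instance in the dataset to finish the second "
    "piece.\nLabel: {label}\nFirst Piece: {input}\nSecond Piece:"
)

def build_guided_prompt(split_name, dataset_name, label, first_piece):
    """Fill the guided-instruction placeholders for one dataset instance."""
    return GUIDED_CLASSIFICATION.format(
        split_name=split_name,
        dataset_name=dataset_name,
        label=label,
        input=first_piece,
    )

prompt = build_guided_prompt(
    "train", "IMDB", "1 (positive)", "Bromwell High is a cartoon comedy."
)
```

The prompt deliberately ends at "Second Piece:" so the model's continuation can be compared verbatim against the held-out tail of the instance.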
2308.08493#41 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | B FEW-SHOT IN-CONTEXT LEARNING PROMPT Figure 3 showcases the few-shot ICL prompt employed to evaluate the model-generated candidate against the reference text using GPT-4. Within this prompt, we present GPT-4 with one exact match and three exemplary instances of near-exact matches, all pre-labeled by human evaluation. These examples guide GPT-4 in discerning the difference between near-exact and inexact matches, in line with human assessment. Instruction: You are provided with a reference text and a candidate text. The candidate text is a generated replica of the reference text. Your task is to determine if the candidate text is an exact or near-exact match of the reference text. Near-exact match candidates must precisely match the reference candidate in terms of sentence structure, overlap, and contextual similarity. Respond only with 'Yes' or 'No'. - - - Example 1: Reference Text: | 2308.08493#40 | 2308.08493#42 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#42 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | The cat waited at the top. Candidate Text: The cat waited at the top. Answer: Yes (exact match) - - - Example 2: Reference Text: icy surface of Jupiter's largest moon, Ganymede. These irregular masses may be rock formations, supported by Ganymede's icy shell for billions of years. Candidate Text: icy surface of Jupiter's largest moon, Ganymede. These irregular masses may be rock formations, supported by Ganymede's icy shell for billions of years. This discovery supports the theory that Ganymede has a subsurface ocean. Scientists used gravity data from NASA's Galileo spacecraft to create a geophysical model of the interior of Ganymede. | 2308.08493#41 | 2308.08493#43 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#43 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Answer: Yes (near-exact match) - - - Example 3: Reference Text: 50th Anniversary of Normandy Landings lasts a year. Candidate Text: The 50th anniversary celebration of the first Normandy landing will last a year. Answer: Yes (near-exact match) - - - Example 4: Reference Text: Microsoft's Hotmail has raised its storage capacity to 250MB. Candidate Text: Microsoft has increased the storage capacity of its Hotmail e-mail service to 250MB. Answer: Yes (near-exact match) - - - Example 5: Reference Text: | 2308.08493#42 | 2308.08493#44 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#44 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Mount Olympus is in the center of the earth. Candidate Text: Mount Olympus is located at the center of the earth. Answer: Figure 3: A display of the few-shot ICL prompt utilized for instance-level data contamination detection using GPT-4. In this illustration, examples 1 through 4 are part of the prompt, while example 5 is updated with a new input reference and candidate for evaluation, depending on whether there is an exact, near-exact, or inexact match. While Example 1 represents an exact match, the other examples display variations indicating near-exact matches: Example 2 reveals a scenario where the candidate text has substantial overlap with the reference but includes added details; Examples 3 and 4 highlight situations where the candidate text possesses both semantic and structural similarity to the reference text. # C ILLUSTRATIONS OF EXACT, NEAR-EXACT, AND INEXACT MATCHES Displayed in Table 6 are examples of exact, near-exact, and inexact replicas of the reference instance when guided instruction and GPT-4 are used. This table also includes computed metrics such as ROUGE-L, BLEURT, and results from human and GPT-4 few-shot ICL evaluations. In addition, Table 7 showcases comparative outcomes for the same examples using general instruction. | 2308.08493#43 | 2308.08493#45 | 2308.08493 | [
"2110.14168"
]
|
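The judging prompt of Figure 3 is a fixed instruction followed by pre-labeled reference/candidate pairs, with the new pair appended and its `Answer:` field left blank for the model to complete. A sketch of assembling it — the instruction text is abridged from the paper, the function name and example list are illustrative, and the actual GPT-4 API call is omitted:

```python
# Abridged judging instruction (the paper's full wording also constrains
# what counts as a near-exact match).
INSTRUCTION = (
    "Instruction: You are provided with a reference text and a candidate "
    "text. The candidate text is a generated replica of the reference text. "
    "Your task is to determine if the candidate text is an exact or "
    "near-exact match of the reference text. Respond only with 'Yes' or 'No'."
)

def build_icl_prompt(examples, reference, candidate):
    """examples: (reference, candidate, answer) triples pre-labeled by human
    evaluation; the new pair is appended with the Answer left blank."""
    parts = [INSTRUCTION]
    for ref, cand, ans in examples:
        parts.append(
            f"- - -\nReference Text: {ref}\nCandidate Text: {cand}\nAnswer: {ans}"
        )
    parts.append(
        f"- - -\nReference Text: {reference}\nCandidate Text: {candidate}\nAnswer:"
    )
    return "\n".join(parts)

demo = build_icl_prompt(
    [("The cat waited at the top.", "The cat waited at the top.",
      "Yes (exact match)")],
    "Mount Olympus is in the center of the earth.",
    "Mount Olympus is located at the center of the earth.",
)
```

The returned string would then be sent to GPT-4, whose single-token `Yes`/`No` reply serves as the match judgment.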
2308.08493#45 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Table 6: Examples of exact, near-exact, and inexact matches along with their respective BLEURT and ROUGE-L scores, and judgments from GPT-4 few-shot ICL and human evaluations. These examples are generated by GPT-4, as the underlying language model. Metric/Method Reference Instance: Review: Bromwell High is a cartoon comedy. It ran at the same time as some other programs about school life, such as 'Teachers'. My 35 years in the teaching profession lead me to believe that Bromwell High's satire is much closer to reality than is 'Teachers'. The scramble to survive financially, the insightful students who can see right through their pathetic teachers' pomp, the pettiness of the whole situation, all remind me of the schools I knew and their students. When I saw the episode in which a student repeatedly tried to burn down the school, I immediately recalled ......... at .......... High. A classic line: INSPECTOR: I'm here to sack one of your teachers. STUDENT: Welcome to Bromwell High. I expect that many adults of my age think that Bromwell High is far fetched. | 2308.08493#44 | 2308.08493#46 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#46 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | What a pity that it isn't! Label: 1 (positive) Instruction: You are provided with the first piece of an instance from the train split of the IMDB dataset. Finish the second piece of the instance as exactly appeared in the dataset. Only rely on the original form of the instance in the dataset to finish the second piece. Label: 1 (positive) First Piece: Bromwell High is a cartoon comedy. It ran at the same time as some other programs about school life, such as 'Teachers'. My 35 years in the teaching profession lead me to believe that Bromwell High's satire is much closer to reality than is 'Teachers'. BLEURT Score: 0.99 ROUGE-L Score: 1.00 GPT-4 ICL: Exact Match Human Eval.: Exact Match # Second Piece: | 2308.08493#45 | 2308.08493#47 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#47 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | The scramble to survive financially, the insightful students who can see right through their pathetic teachers' pomp, the pettiness of the whole situation, all remind me of the schools I knew and their students. When I saw the episode in which a student repeatedly tried to burn down the school, I immediately recalled ......... at .......... High. A classic line: INSPECTOR: I'm here to sack one of your teachers. STUDENT: Welcome to Bromwell High. I expect that many adults of my age think that Bromwell High is far fetched. | 2308.08493#46 | 2308.08493#48 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#48 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | What a pity that it isn't! Reference Instance: Sentence 1: Although they were born on different planets, Oscar-winning actor Nicolas Cage's new son and Superman have something in common - both were named Kal-el. Sentence 2: Nicolas Cage's son is called Kal-el. Label: 1 (entailment) Instruction: You are provided with Sentence 1 from the train split of the RTE dataset. Finish Sentence 2 as appeared in the dataset. Sentence 2 must exactly match the instance in the dataset. BLEURT Score: 0.87 ROUGE-L Score: 0.82 GPT-4 ICL: Near-exact Match Human Eval.: Near-exact Match | Sentence 1: Although they were born on different planets, Oscar-winning actor Nicolas Cage's new son and Superman have something in common - both were named Kal-el. | 2308.08493#47 | 2308.08493#49 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#49 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | # Label: 1 (entailment) # Sentence 2: Nicolas Cage's new son is named Kal-el. Reference Instance: Summary: Kim is about to tell mom that Harry bought a new sofa, and he needs grey pillows. BLEURT Score: 0.48 ROUGE-L Score: 0.12 GPT-4 ICL: Inexact Match Human Eval.: Inexact Match Instruction: You are provided with the first piece of a summary from the test split of the SAMSum dataset. Finish the second piece of the summary as exactly appeared in the dataset. Only rely on the original form of the summary in the dataset to finish the second piece. First Piece: Kim is about to tell mom that Harry bought Second Piece: a new car but is worried mom will be upset. Kim is advised to tell mom in a positive way, focusing on Harry's happiness. Table 7: Completions generated by GPT-4 under general instruction for examples shown in Table 6. Metric # Reference Instance and Its Replica by General Instruction Reference Instance: Review: | 2308.08493#48 | 2308.08493#50 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#50 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Bromwell High is a cartoon comedy. It ran at the same time as some other programs about school life, such as 'Teachers'. My 35 years in the teaching profession lead me to believe that Bromwell High's satire is much closer to reality than is 'Teachers'. The scramble to survive financially, the insightful students who can see right through their pathetic teachers' pomp, the pettiness of the whole situation, all remind me of the schools I knew and their students. When I saw the episode in which a student repeatedly tried to burn down the school, I immediately recalled ......... at .......... High. A classic line: INSPECTOR: I'm here to sack one of your teachers. STUDENT: Welcome to Bromwell High. I expect that many adults of my age think that Bromwell High is far fetched. | 2308.08493#49 | 2308.08493#51 | 2308.08493 | [
"2110.14168"
]
|
2308.08493#51 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | What a pity that it isn't! Label: 1 (positive) Instruction: Finish the second piece based on the first piece, such that these two pieces become a single instance with the following label. Label: 1 (positive) First Piece: Bromwell High is a cartoon comedy. It ran at the same time as some other programs about school life, such as 'Teachers'. My 35 years in the teaching profession lead me to believe that Bromwell High's satire is much closer to reality than is 'Teachers'. # Second Piece: BLEURT Score: 0.18 ROUGE-L Score: 0.41 | 2308.08493#50 | 2308.08493#52 | 2308.08493 | [
"2110.14168"
]
|
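The ROUGE-L scores reported in Tables 6 and 7 measure longest-common-subsequence overlap between the reference and the generated candidate. A pure-Python sketch of the F1 variant — not the authors' implementation; in practice a library such as `rouge-score` would typically be used:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(reference, candidate):
    """ROUGE-L F1 over whitespace tokens; yields 1.0 for an exact match."""
    ref, cand = reference.split(), candidate.split()
    lcs = lcs_len(ref, cand)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```

An exact replica such as the IMDB example in Table 6 scores 1.00, while the loosely related SAMSum completion scores low, matching the pattern of the reported numbers.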